Test Report: KVM_Linux_crio 17340

49babfe4fcdff3bcc398a25366bae00d3ae6dc66:2023-10-02:31256

Failed tests (27/288)

Order  Failed test  Duration (s)
25 TestAddons/parallel/Ingress 160.88
37 TestAddons/StoppedEnableDisable 155.5
153 TestIngressAddonLegacy/serial/ValidateIngressAddons 173.47
201 TestMultiNode/serial/PingHostFrom2Pods 3.27
207 TestMultiNode/serial/RestartKeepsNodes 686.11
209 TestMultiNode/serial/StopMultiNode 143.6
216 TestPreload 280.2
222 TestRunningBinaryUpgrade 144.42
245 TestNoKubernetes/serial/StartNoArgs 44.89
247 TestStoppedBinaryUpgrade/Upgrade 286.53
258 TestPause/serial/SecondStartNoReconfiguration 50.7
314 TestStartStop/group/no-preload/serial/Stop 139.27
317 TestStartStop/group/old-k8s-version/serial/Stop 139.69
320 TestStartStop/group/embed-certs/serial/Stop 140.24
323 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.9
324 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
326 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.38
328 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.32
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.23
334 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.32
335 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.39
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 417.36
337 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 381.68
338 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 308.1
339 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 241.84
TestAddons/parallel/Ingress (160.88s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) Run:  kubectl --context addons-304007 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:210: (dbg) Run:  kubectl --context addons-304007 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context addons-304007 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [022886fc-f9e9-4e77-ba67-52cd421e8921] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [022886fc-f9e9-4e77-ba67-52cd421e8921] Running
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.035813938s
addons_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p addons-304007 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-304007 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.981267326s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:256: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:264: (dbg) Run:  kubectl --context addons-304007 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p addons-304007 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.39.235
addons_test.go:284: (dbg) Run:  out/minikube-linux-amd64 -p addons-304007 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-linux-amd64 -p addons-304007 addons disable ingress-dns --alsologtostderr -v=1: (1.061449021s)
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-304007 addons disable ingress --alsologtostderr -v=1
addons_test.go:289: (dbg) Done: out/minikube-linux-amd64 -p addons-304007 addons disable ingress --alsologtostderr -v=1: (7.779390257s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-304007 -n addons-304007
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-304007 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-304007 logs -n 25: (1.441066167s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-752606 | jenkins | v1.31.2 | 02 Oct 23 10:35 UTC |                     |
	|         | -p download-only-752606                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-752606 | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC |                     |
	|         | -p download-only-752606                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC | 02 Oct 23 10:36 UTC |
	| delete  | -p download-only-752606                                                                     | download-only-752606 | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC | 02 Oct 23 10:36 UTC |
	| delete  | -p download-only-752606                                                                     | download-only-752606 | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC | 02 Oct 23 10:36 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-775199 | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC |                     |
	|         | binary-mirror-775199                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34689                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-775199                                                                     | binary-mirror-775199 | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC | 02 Oct 23 10:36 UTC |
	| start   | -p addons-304007 --wait=true                                                                | addons-304007        | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC | 02 Oct 23 10:39 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-304007        | jenkins | v1.31.2 | 02 Oct 23 10:39 UTC | 02 Oct 23 10:39 UTC |
	|         | -p addons-304007                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-304007 addons                                                                        | addons-304007        | jenkins | v1.31.2 | 02 Oct 23 10:39 UTC | 02 Oct 23 10:39 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-304007 ssh cat                                                                       | addons-304007        | jenkins | v1.31.2 | 02 Oct 23 10:39 UTC | 02 Oct 23 10:39 UTC |
	|         | /opt/local-path-provisioner/pvc-d402bdb5-3384-475e-b837-b98b15392ced_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-304007 addons disable                                                                | addons-304007        | jenkins | v1.31.2 | 02 Oct 23 10:39 UTC | 02 Oct 23 10:40 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-304007 ip                                                                            | addons-304007        | jenkins | v1.31.2 | 02 Oct 23 10:39 UTC | 02 Oct 23 10:39 UTC |
	| addons  | addons-304007 addons disable                                                                | addons-304007        | jenkins | v1.31.2 | 02 Oct 23 10:39 UTC | 02 Oct 23 10:39 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-304007        | jenkins | v1.31.2 | 02 Oct 23 10:39 UTC | 02 Oct 23 10:39 UTC |
	|         | addons-304007                                                                               |                      |         |         |                     |                     |
	| addons  | addons-304007 addons disable                                                                | addons-304007        | jenkins | v1.31.2 | 02 Oct 23 10:39 UTC | 02 Oct 23 10:39 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-304007        | jenkins | v1.31.2 | 02 Oct 23 10:39 UTC | 02 Oct 23 10:39 UTC |
	|         | addons-304007                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-304007 ssh curl -s                                                                   | addons-304007        | jenkins | v1.31.2 | 02 Oct 23 10:39 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-304007 addons                                                                        | addons-304007        | jenkins | v1.31.2 | 02 Oct 23 10:40 UTC | 02 Oct 23 10:40 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-304007 addons                                                                        | addons-304007        | jenkins | v1.31.2 | 02 Oct 23 10:40 UTC | 02 Oct 23 10:40 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-304007 ip                                                                            | addons-304007        | jenkins | v1.31.2 | 02 Oct 23 10:42 UTC | 02 Oct 23 10:42 UTC |
	| addons  | addons-304007 addons disable                                                                | addons-304007        | jenkins | v1.31.2 | 02 Oct 23 10:42 UTC | 02 Oct 23 10:42 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-304007 addons disable                                                                | addons-304007        | jenkins | v1.31.2 | 02 Oct 23 10:42 UTC | 02 Oct 23 10:42 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 10:36:31
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 10:36:31.150952  340248 out.go:296] Setting OutFile to fd 1 ...
	I1002 10:36:31.151226  340248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:36:31.151237  340248 out.go:309] Setting ErrFile to fd 2...
	I1002 10:36:31.151244  340248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:36:31.151471  340248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 10:36:31.152122  340248 out.go:303] Setting JSON to false
	I1002 10:36:31.153210  340248 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4737,"bootTime":1696238254,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 10:36:31.153330  340248 start.go:138] virtualization: kvm guest
	I1002 10:36:31.208409  340248 out.go:177] * [addons-304007] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 10:36:31.271612  340248 notify.go:220] Checking for updates...
	I1002 10:36:31.271632  340248 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 10:36:31.334322  340248 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 10:36:31.396279  340248 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 10:36:31.458740  340248 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 10:36:31.521313  340248 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 10:36:31.534194  340248 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 10:36:31.535807  340248 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 10:36:31.567003  340248 out.go:177] * Using the kvm2 driver based on user configuration
	I1002 10:36:31.568484  340248 start.go:298] selected driver: kvm2
	I1002 10:36:31.568494  340248 start.go:902] validating driver "kvm2" against <nil>
	I1002 10:36:31.568515  340248 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 10:36:31.569195  340248 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:36:31.569279  340248 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 10:36:31.583234  340248 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 10:36:31.583287  340248 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 10:36:31.583488  340248 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 10:36:31.583543  340248 cni.go:84] Creating CNI manager for ""
	I1002 10:36:31.583564  340248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 10:36:31.583583  340248 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 10:36:31.583595  340248 start_flags.go:321] config:
	{Name:addons-304007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-304007 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:36:31.583763  340248 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:36:31.585495  340248 out.go:177] * Starting control plane node addons-304007 in cluster addons-304007
	I1002 10:36:31.586745  340248 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 10:36:31.586784  340248 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1002 10:36:31.586797  340248 cache.go:57] Caching tarball of preloaded images
	I1002 10:36:31.586888  340248 preload.go:174] Found /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 10:36:31.586900  340248 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 10:36:31.587206  340248 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/config.json ...
	I1002 10:36:31.587234  340248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/config.json: {Name:mk01fc84a093719a3b6eef0c4c75117026b760bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:36:31.587374  340248 start.go:365] acquiring machines lock for addons-304007: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 10:36:31.587434  340248 start.go:369] acquired machines lock for "addons-304007" in 44.456µs
	I1002 10:36:31.587458  340248 start.go:93] Provisioning new machine with config: &{Name:addons-304007 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-304007 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 10:36:31.587552  340248 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 10:36:31.589173  340248 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1002 10:36:31.589328  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:36:31.589377  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:36:31.603001  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36533
	I1002 10:36:31.603475  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:36:31.603996  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:36:31.604026  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:36:31.604414  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:36:31.604613  340248 main.go:141] libmachine: (addons-304007) Calling .GetMachineName
	I1002 10:36:31.604785  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:36:31.604926  340248 start.go:159] libmachine.API.Create for "addons-304007" (driver="kvm2")
	I1002 10:36:31.604997  340248 client.go:168] LocalClient.Create starting
	I1002 10:36:31.605044  340248 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem
	I1002 10:36:31.659545  340248 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem
	I1002 10:36:31.748411  340248 main.go:141] libmachine: Running pre-create checks...
	I1002 10:36:31.748440  340248 main.go:141] libmachine: (addons-304007) Calling .PreCreateCheck
	I1002 10:36:31.748998  340248 main.go:141] libmachine: (addons-304007) Calling .GetConfigRaw
	I1002 10:36:31.749487  340248 main.go:141] libmachine: Creating machine...
	I1002 10:36:31.749505  340248 main.go:141] libmachine: (addons-304007) Calling .Create
	I1002 10:36:31.749661  340248 main.go:141] libmachine: (addons-304007) Creating KVM machine...
	I1002 10:36:31.750903  340248 main.go:141] libmachine: (addons-304007) DBG | found existing default KVM network
	I1002 10:36:31.751659  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:31.751503  340269 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a30}
	I1002 10:36:31.757091  340248 main.go:141] libmachine: (addons-304007) DBG | trying to create private KVM network mk-addons-304007 192.168.39.0/24...
	I1002 10:36:31.823295  340248 main.go:141] libmachine: (addons-304007) DBG | private KVM network mk-addons-304007 192.168.39.0/24 created
	I1002 10:36:31.823332  340248 main.go:141] libmachine: (addons-304007) Setting up store path in /home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007 ...
	I1002 10:36:31.823343  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:31.823238  340269 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 10:36:31.823429  340248 main.go:141] libmachine: (addons-304007) Building disk image from file:///home/jenkins/minikube-integration/17340-332611/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1002 10:36:31.823522  340248 main.go:141] libmachine: (addons-304007) Downloading /home/jenkins/minikube-integration/17340-332611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17340-332611/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1002 10:36:32.070036  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:32.069860  340269 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa...
	I1002 10:36:32.227155  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:32.227008  340269 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/addons-304007.rawdisk...
	I1002 10:36:32.227187  340248 main.go:141] libmachine: (addons-304007) DBG | Writing magic tar header
	I1002 10:36:32.227199  340248 main.go:141] libmachine: (addons-304007) DBG | Writing SSH key tar header
	I1002 10:36:32.227217  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:32.227152  340269 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007 ...
	I1002 10:36:32.227395  340248 main.go:141] libmachine: (addons-304007) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007
	I1002 10:36:32.227420  340248 main.go:141] libmachine: (addons-304007) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube/machines
	I1002 10:36:32.227440  340248 main.go:141] libmachine: (addons-304007) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007 (perms=drwx------)
	I1002 10:36:32.227463  340248 main.go:141] libmachine: (addons-304007) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube/machines (perms=drwxr-xr-x)
	I1002 10:36:32.227479  340248 main.go:141] libmachine: (addons-304007) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 10:36:32.227487  340248 main.go:141] libmachine: (addons-304007) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube (perms=drwxr-xr-x)
	I1002 10:36:32.227497  340248 main.go:141] libmachine: (addons-304007) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611 (perms=drwxrwxr-x)
	I1002 10:36:32.227506  340248 main.go:141] libmachine: (addons-304007) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 10:36:32.227517  340248 main.go:141] libmachine: (addons-304007) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611
	I1002 10:36:32.227532  340248 main.go:141] libmachine: (addons-304007) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1002 10:36:32.227549  340248 main.go:141] libmachine: (addons-304007) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 10:36:32.227560  340248 main.go:141] libmachine: (addons-304007) DBG | Checking permissions on dir: /home/jenkins
	I1002 10:36:32.227574  340248 main.go:141] libmachine: (addons-304007) DBG | Checking permissions on dir: /home
	I1002 10:36:32.227583  340248 main.go:141] libmachine: (addons-304007) Creating domain...
	I1002 10:36:32.227593  340248 main.go:141] libmachine: (addons-304007) DBG | Skipping /home - not owner
	I1002 10:36:32.228582  340248 main.go:141] libmachine: (addons-304007) define libvirt domain using xml: 
	I1002 10:36:32.228605  340248 main.go:141] libmachine: (addons-304007) <domain type='kvm'>
	I1002 10:36:32.228613  340248 main.go:141] libmachine: (addons-304007)   <name>addons-304007</name>
	I1002 10:36:32.228622  340248 main.go:141] libmachine: (addons-304007)   <memory unit='MiB'>4000</memory>
	I1002 10:36:32.228628  340248 main.go:141] libmachine: (addons-304007)   <vcpu>2</vcpu>
	I1002 10:36:32.228639  340248 main.go:141] libmachine: (addons-304007)   <features>
	I1002 10:36:32.228649  340248 main.go:141] libmachine: (addons-304007)     <acpi/>
	I1002 10:36:32.228658  340248 main.go:141] libmachine: (addons-304007)     <apic/>
	I1002 10:36:32.228667  340248 main.go:141] libmachine: (addons-304007)     <pae/>
	I1002 10:36:32.228675  340248 main.go:141] libmachine: (addons-304007)     
	I1002 10:36:32.228685  340248 main.go:141] libmachine: (addons-304007)   </features>
	I1002 10:36:32.228699  340248 main.go:141] libmachine: (addons-304007)   <cpu mode='host-passthrough'>
	I1002 10:36:32.228708  340248 main.go:141] libmachine: (addons-304007)   
	I1002 10:36:32.228725  340248 main.go:141] libmachine: (addons-304007)   </cpu>
	I1002 10:36:32.228738  340248 main.go:141] libmachine: (addons-304007)   <os>
	I1002 10:36:32.228749  340248 main.go:141] libmachine: (addons-304007)     <type>hvm</type>
	I1002 10:36:32.228760  340248 main.go:141] libmachine: (addons-304007)     <boot dev='cdrom'/>
	I1002 10:36:32.228770  340248 main.go:141] libmachine: (addons-304007)     <boot dev='hd'/>
	I1002 10:36:32.228784  340248 main.go:141] libmachine: (addons-304007)     <bootmenu enable='no'/>
	I1002 10:36:32.228799  340248 main.go:141] libmachine: (addons-304007)   </os>
	I1002 10:36:32.228812  340248 main.go:141] libmachine: (addons-304007)   <devices>
	I1002 10:36:32.228836  340248 main.go:141] libmachine: (addons-304007)     <disk type='file' device='cdrom'>
	I1002 10:36:32.228851  340248 main.go:141] libmachine: (addons-304007)       <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/boot2docker.iso'/>
	I1002 10:36:32.228859  340248 main.go:141] libmachine: (addons-304007)       <target dev='hdc' bus='scsi'/>
	I1002 10:36:32.228866  340248 main.go:141] libmachine: (addons-304007)       <readonly/>
	I1002 10:36:32.228873  340248 main.go:141] libmachine: (addons-304007)     </disk>
	I1002 10:36:32.228880  340248 main.go:141] libmachine: (addons-304007)     <disk type='file' device='disk'>
	I1002 10:36:32.228891  340248 main.go:141] libmachine: (addons-304007)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 10:36:32.228901  340248 main.go:141] libmachine: (addons-304007)       <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/addons-304007.rawdisk'/>
	I1002 10:36:32.228910  340248 main.go:141] libmachine: (addons-304007)       <target dev='hda' bus='virtio'/>
	I1002 10:36:32.228917  340248 main.go:141] libmachine: (addons-304007)     </disk>
	I1002 10:36:32.228944  340248 main.go:141] libmachine: (addons-304007)     <interface type='network'>
	I1002 10:36:32.228970  340248 main.go:141] libmachine: (addons-304007)       <source network='mk-addons-304007'/>
	I1002 10:36:32.228982  340248 main.go:141] libmachine: (addons-304007)       <model type='virtio'/>
	I1002 10:36:32.228995  340248 main.go:141] libmachine: (addons-304007)     </interface>
	I1002 10:36:32.229009  340248 main.go:141] libmachine: (addons-304007)     <interface type='network'>
	I1002 10:36:32.229021  340248 main.go:141] libmachine: (addons-304007)       <source network='default'/>
	I1002 10:36:32.229036  340248 main.go:141] libmachine: (addons-304007)       <model type='virtio'/>
	I1002 10:36:32.229052  340248 main.go:141] libmachine: (addons-304007)     </interface>
	I1002 10:36:32.229070  340248 main.go:141] libmachine: (addons-304007)     <serial type='pty'>
	I1002 10:36:32.229083  340248 main.go:141] libmachine: (addons-304007)       <target port='0'/>
	I1002 10:36:32.229096  340248 main.go:141] libmachine: (addons-304007)     </serial>
	I1002 10:36:32.229108  340248 main.go:141] libmachine: (addons-304007)     <console type='pty'>
	I1002 10:36:32.229143  340248 main.go:141] libmachine: (addons-304007)       <target type='serial' port='0'/>
	I1002 10:36:32.229163  340248 main.go:141] libmachine: (addons-304007)     </console>
	I1002 10:36:32.229177  340248 main.go:141] libmachine: (addons-304007)     <rng model='virtio'>
	I1002 10:36:32.229186  340248 main.go:141] libmachine: (addons-304007)       <backend model='random'>/dev/random</backend>
	I1002 10:36:32.229195  340248 main.go:141] libmachine: (addons-304007)     </rng>
	I1002 10:36:32.229201  340248 main.go:141] libmachine: (addons-304007)     
	I1002 10:36:32.229209  340248 main.go:141] libmachine: (addons-304007)     
	I1002 10:36:32.229214  340248 main.go:141] libmachine: (addons-304007)   </devices>
	I1002 10:36:32.229222  340248 main.go:141] libmachine: (addons-304007) </domain>
	I1002 10:36:32.229234  340248 main.go:141] libmachine: (addons-304007) 
	I1002 10:36:32.234932  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:15:41:e4 in network default
	I1002 10:36:32.235496  340248 main.go:141] libmachine: (addons-304007) Ensuring networks are active...
	I1002 10:36:32.235518  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:32.236154  340248 main.go:141] libmachine: (addons-304007) Ensuring network default is active
	I1002 10:36:32.236453  340248 main.go:141] libmachine: (addons-304007) Ensuring network mk-addons-304007 is active
	I1002 10:36:32.236980  340248 main.go:141] libmachine: (addons-304007) Getting domain xml...
	I1002 10:36:32.237549  340248 main.go:141] libmachine: (addons-304007) Creating domain...
	I1002 10:36:33.652416  340248 main.go:141] libmachine: (addons-304007) Waiting to get IP...
	I1002 10:36:33.654580  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:33.655040  340248 main.go:141] libmachine: (addons-304007) DBG | unable to find current IP address of domain addons-304007 in network mk-addons-304007
	I1002 10:36:33.655076  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:33.655017  340269 retry.go:31] will retry after 228.998267ms: waiting for machine to come up
	I1002 10:36:33.885609  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:33.886036  340248 main.go:141] libmachine: (addons-304007) DBG | unable to find current IP address of domain addons-304007 in network mk-addons-304007
	I1002 10:36:33.886071  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:33.885963  340269 retry.go:31] will retry after 289.500284ms: waiting for machine to come up
	I1002 10:36:34.177410  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:34.177875  340248 main.go:141] libmachine: (addons-304007) DBG | unable to find current IP address of domain addons-304007 in network mk-addons-304007
	I1002 10:36:34.177908  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:34.177814  340269 retry.go:31] will retry after 360.903587ms: waiting for machine to come up
	I1002 10:36:34.540341  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:34.540764  340248 main.go:141] libmachine: (addons-304007) DBG | unable to find current IP address of domain addons-304007 in network mk-addons-304007
	I1002 10:36:34.540798  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:34.540699  340269 retry.go:31] will retry after 500.977212ms: waiting for machine to come up
	I1002 10:36:35.043397  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:35.043778  340248 main.go:141] libmachine: (addons-304007) DBG | unable to find current IP address of domain addons-304007 in network mk-addons-304007
	I1002 10:36:35.043818  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:35.043770  340269 retry.go:31] will retry after 758.33959ms: waiting for machine to come up
	I1002 10:36:35.803702  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:35.804174  340248 main.go:141] libmachine: (addons-304007) DBG | unable to find current IP address of domain addons-304007 in network mk-addons-304007
	I1002 10:36:35.804202  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:35.804123  340269 retry.go:31] will retry after 786.052968ms: waiting for machine to come up
	I1002 10:36:36.592145  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:36.592551  340248 main.go:141] libmachine: (addons-304007) DBG | unable to find current IP address of domain addons-304007 in network mk-addons-304007
	I1002 10:36:36.592607  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:36.592509  340269 retry.go:31] will retry after 1.047383405s: waiting for machine to come up
	I1002 10:36:37.641364  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:37.641833  340248 main.go:141] libmachine: (addons-304007) DBG | unable to find current IP address of domain addons-304007 in network mk-addons-304007
	I1002 10:36:37.641865  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:37.641781  340269 retry.go:31] will retry after 1.005062961s: waiting for machine to come up
	I1002 10:36:38.648965  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:38.649353  340248 main.go:141] libmachine: (addons-304007) DBG | unable to find current IP address of domain addons-304007 in network mk-addons-304007
	I1002 10:36:38.649384  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:38.649294  340269 retry.go:31] will retry after 1.467313676s: waiting for machine to come up
	I1002 10:36:40.118969  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:40.119437  340248 main.go:141] libmachine: (addons-304007) DBG | unable to find current IP address of domain addons-304007 in network mk-addons-304007
	I1002 10:36:40.119467  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:40.119363  340269 retry.go:31] will retry after 2.247938882s: waiting for machine to come up
	I1002 10:36:42.368588  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:42.368975  340248 main.go:141] libmachine: (addons-304007) DBG | unable to find current IP address of domain addons-304007 in network mk-addons-304007
	I1002 10:36:42.369010  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:42.368913  340269 retry.go:31] will retry after 2.836534663s: waiting for machine to come up
	I1002 10:36:45.206647  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:45.207178  340248 main.go:141] libmachine: (addons-304007) DBG | unable to find current IP address of domain addons-304007 in network mk-addons-304007
	I1002 10:36:45.207208  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:45.207128  340269 retry.go:31] will retry after 3.129807795s: waiting for machine to come up
	I1002 10:36:48.339133  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:48.339592  340248 main.go:141] libmachine: (addons-304007) DBG | unable to find current IP address of domain addons-304007 in network mk-addons-304007
	I1002 10:36:48.339630  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:48.339512  340269 retry.go:31] will retry after 3.4310897s: waiting for machine to come up
	I1002 10:36:51.771871  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:51.772295  340248 main.go:141] libmachine: (addons-304007) DBG | unable to find current IP address of domain addons-304007 in network mk-addons-304007
	I1002 10:36:51.772325  340248 main.go:141] libmachine: (addons-304007) DBG | I1002 10:36:51.772230  340269 retry.go:31] will retry after 4.811841071s: waiting for machine to come up
	I1002 10:36:56.589088  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:56.589537  340248 main.go:141] libmachine: (addons-304007) Found IP for machine: 192.168.39.235
	I1002 10:36:56.589564  340248 main.go:141] libmachine: (addons-304007) Reserving static IP address...
	I1002 10:36:56.589583  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has current primary IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:56.590021  340248 main.go:141] libmachine: (addons-304007) DBG | unable to find host DHCP lease matching {name: "addons-304007", mac: "52:54:00:49:b6:9e", ip: "192.168.39.235"} in network mk-addons-304007
	I1002 10:36:56.661268  340248 main.go:141] libmachine: (addons-304007) DBG | Getting to WaitForSSH function...
	I1002 10:36:56.661306  340248 main.go:141] libmachine: (addons-304007) Reserved static IP address: 192.168.39.235
	I1002 10:36:56.661321  340248 main.go:141] libmachine: (addons-304007) Waiting for SSH to be available...
	I1002 10:36:56.663580  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:56.663944  340248 main.go:141] libmachine: (addons-304007) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007
	I1002 10:36:56.663978  340248 main.go:141] libmachine: (addons-304007) DBG | unable to find defined IP address of network mk-addons-304007 interface with MAC address 52:54:00:49:b6:9e
	I1002 10:36:56.664101  340248 main.go:141] libmachine: (addons-304007) DBG | Using SSH client type: external
	I1002 10:36:56.664133  340248 main.go:141] libmachine: (addons-304007) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa (-rw-------)
	I1002 10:36:56.664176  340248 main.go:141] libmachine: (addons-304007) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 10:36:56.664197  340248 main.go:141] libmachine: (addons-304007) DBG | About to run SSH command:
	I1002 10:36:56.664213  340248 main.go:141] libmachine: (addons-304007) DBG | exit 0
	I1002 10:36:56.667794  340248 main.go:141] libmachine: (addons-304007) DBG | SSH cmd err, output: exit status 255: 
	I1002 10:36:56.667818  340248 main.go:141] libmachine: (addons-304007) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1002 10:36:56.667827  340248 main.go:141] libmachine: (addons-304007) DBG | command : exit 0
	I1002 10:36:56.667832  340248 main.go:141] libmachine: (addons-304007) DBG | err     : exit status 255
	I1002 10:36:56.667840  340248 main.go:141] libmachine: (addons-304007) DBG | output  : 
	I1002 10:36:59.669971  340248 main.go:141] libmachine: (addons-304007) DBG | Getting to WaitForSSH function...
	I1002 10:36:59.672613  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:59.672967  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:36:59.673002  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:59.673212  340248 main.go:141] libmachine: (addons-304007) DBG | Using SSH client type: external
	I1002 10:36:59.673241  340248 main.go:141] libmachine: (addons-304007) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa (-rw-------)
	I1002 10:36:59.673275  340248 main.go:141] libmachine: (addons-304007) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 10:36:59.673298  340248 main.go:141] libmachine: (addons-304007) DBG | About to run SSH command:
	I1002 10:36:59.673314  340248 main.go:141] libmachine: (addons-304007) DBG | exit 0
	I1002 10:36:59.766105  340248 main.go:141] libmachine: (addons-304007) DBG | SSH cmd err, output: <nil>: 
	I1002 10:36:59.766430  340248 main.go:141] libmachine: (addons-304007) KVM machine creation complete!
	I1002 10:36:59.766801  340248 main.go:141] libmachine: (addons-304007) Calling .GetConfigRaw
	I1002 10:36:59.767372  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:36:59.767591  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:36:59.767755  340248 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 10:36:59.767773  340248 main.go:141] libmachine: (addons-304007) Calling .GetState
	I1002 10:36:59.768963  340248 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 10:36:59.768978  340248 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 10:36:59.768985  340248 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 10:36:59.768991  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:36:59.771458  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:59.771843  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:36:59.771872  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:59.771990  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:36:59.772140  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:36:59.772282  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:36:59.772416  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:36:59.772587  340248 main.go:141] libmachine: Using SSH client type: native
	I1002 10:36:59.773000  340248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1002 10:36:59.773017  340248 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 10:36:59.897363  340248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 10:36:59.897394  340248 main.go:141] libmachine: Detecting the provisioner...
	I1002 10:36:59.897403  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:36:59.900230  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:59.900639  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:36:59.900673  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:36:59.900798  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:36:59.901007  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:36:59.901167  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:36:59.901317  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:36:59.901538  340248 main.go:141] libmachine: Using SSH client type: native
	I1002 10:36:59.901858  340248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1002 10:36:59.901870  340248 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 10:37:00.027142  340248 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1002 10:37:00.027277  340248 main.go:141] libmachine: found compatible host: buildroot
	I1002 10:37:00.027295  340248 main.go:141] libmachine: Provisioning with buildroot...
	I1002 10:37:00.027309  340248 main.go:141] libmachine: (addons-304007) Calling .GetMachineName
	I1002 10:37:00.027586  340248 buildroot.go:166] provisioning hostname "addons-304007"
	I1002 10:37:00.027615  340248 main.go:141] libmachine: (addons-304007) Calling .GetMachineName
	I1002 10:37:00.027828  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:00.030215  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:00.030599  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:00.030639  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:00.030737  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:00.030948  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:00.031175  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:00.031401  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:00.031608  340248 main.go:141] libmachine: Using SSH client type: native
	I1002 10:37:00.031945  340248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1002 10:37:00.031964  340248 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-304007 && echo "addons-304007" | sudo tee /etc/hostname
	I1002 10:37:00.170589  340248 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-304007
	
	I1002 10:37:00.170617  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:00.173535  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:00.173928  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:00.173974  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:00.174153  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:00.174401  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:00.174597  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:00.174757  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:00.174925  340248 main.go:141] libmachine: Using SSH client type: native
	I1002 10:37:00.175325  340248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1002 10:37:00.175345  340248 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-304007' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-304007/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-304007' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 10:37:00.310499  340248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 10:37:00.310535  340248 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 10:37:00.310585  340248 buildroot.go:174] setting up certificates
	I1002 10:37:00.310621  340248 provision.go:83] configureAuth start
	I1002 10:37:00.310637  340248 main.go:141] libmachine: (addons-304007) Calling .GetMachineName
	I1002 10:37:00.310957  340248 main.go:141] libmachine: (addons-304007) Calling .GetIP
	I1002 10:37:00.313312  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:00.313691  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:00.313718  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:00.313822  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:00.315811  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:00.316086  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:00.316109  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:00.316249  340248 provision.go:138] copyHostCerts
	I1002 10:37:00.316324  340248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 10:37:00.316429  340248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 10:37:00.316490  340248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 10:37:00.316534  340248 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.addons-304007 san=[192.168.39.235 192.168.39.235 localhost 127.0.0.1 minikube addons-304007]
	I1002 10:37:00.457361  340248 provision.go:172] copyRemoteCerts
	I1002 10:37:00.457422  340248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 10:37:00.457450  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:00.460066  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:00.460448  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:00.460494  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:00.460690  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:00.460901  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:00.461061  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:00.461191  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:00.555602  340248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 10:37:00.578979  340248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1002 10:37:00.601900  340248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 10:37:00.624364  340248 provision.go:86] duration metric: configureAuth took 313.723622ms
	I1002 10:37:00.624396  340248 buildroot.go:189] setting minikube options for container-runtime
	I1002 10:37:00.624621  340248 config.go:182] Loaded profile config "addons-304007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 10:37:00.624703  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:00.627303  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:00.627821  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:00.627873  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:00.627993  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:00.628197  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:00.628358  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:00.628492  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:00.628708  340248 main.go:141] libmachine: Using SSH client type: native
	I1002 10:37:00.629015  340248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1002 10:37:00.629032  340248 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 10:37:00.932927  340248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 10:37:00.932957  340248 main.go:141] libmachine: Checking connection to Docker...
	I1002 10:37:00.932970  340248 main.go:141] libmachine: (addons-304007) Calling .GetURL
	I1002 10:37:00.934237  340248 main.go:141] libmachine: (addons-304007) DBG | Using libvirt version 6000000
	I1002 10:37:00.936163  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:00.936492  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:00.936526  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:00.936659  340248 main.go:141] libmachine: Docker is up and running!
	I1002 10:37:00.936682  340248 main.go:141] libmachine: Reticulating splines...
	I1002 10:37:00.936691  340248 client.go:171] LocalClient.Create took 29.331681309s
	I1002 10:37:00.936721  340248 start.go:167] duration metric: libmachine.API.Create for "addons-304007" took 29.331797606s
	I1002 10:37:00.936733  340248 start.go:300] post-start starting for "addons-304007" (driver="kvm2")
	I1002 10:37:00.936773  340248 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 10:37:00.936813  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:00.937066  340248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 10:37:00.937097  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:00.939122  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:00.939380  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:00.939409  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:00.939533  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:00.939722  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:00.939890  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:00.940043  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:01.032832  340248 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 10:37:01.037497  340248 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 10:37:01.037525  340248 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 10:37:01.037595  340248 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 10:37:01.037618  340248 start.go:303] post-start completed in 100.877621ms
	I1002 10:37:01.037654  340248 main.go:141] libmachine: (addons-304007) Calling .GetConfigRaw
	I1002 10:37:01.038247  340248 main.go:141] libmachine: (addons-304007) Calling .GetIP
	I1002 10:37:01.040584  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:01.040884  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:01.040919  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:01.041143  340248 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/config.json ...
	I1002 10:37:01.041355  340248 start.go:128] duration metric: createHost completed in 29.453790353s
	I1002 10:37:01.041389  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:01.043359  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:01.043613  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:01.043638  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:01.043781  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:01.043996  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:01.044176  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:01.044313  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:01.044419  340248 main.go:141] libmachine: Using SSH client type: native
	I1002 10:37:01.044751  340248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I1002 10:37:01.044766  340248 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 10:37:01.171182  340248 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696243021.160617821
	
	I1002 10:37:01.171213  340248 fix.go:206] guest clock: 1696243021.160617821
	I1002 10:37:01.171221  340248 fix.go:219] Guest: 2023-10-02 10:37:01.160617821 +0000 UTC Remote: 2023-10-02 10:37:01.041373675 +0000 UTC m=+29.921625955 (delta=119.244146ms)
	I1002 10:37:01.171263  340248 fix.go:190] guest clock delta is within tolerance: 119.244146ms
	I1002 10:37:01.171268  340248 start.go:83] releasing machines lock for "addons-304007", held for 29.58382415s
	I1002 10:37:01.171292  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:01.171576  340248 main.go:141] libmachine: (addons-304007) Calling .GetIP
	I1002 10:37:01.174180  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:01.174556  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:01.174580  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:01.174739  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:01.175256  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:01.175493  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:01.175612  340248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 10:37:01.175677  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:01.175763  340248 ssh_runner.go:195] Run: cat /version.json
	I1002 10:37:01.175789  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:01.178055  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:01.178333  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:01.178410  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:01.178438  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:01.178565  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:01.178693  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:01.178727  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:01.178728  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:01.178887  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:01.178927  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:01.179057  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:01.179129  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:01.179211  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:01.179371  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:01.267115  340248 ssh_runner.go:195] Run: systemctl --version
	I1002 10:37:01.291553  340248 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 10:37:01.444925  340248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 10:37:01.451288  340248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 10:37:01.451363  340248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 10:37:01.465090  340248 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 10:37:01.465117  340248 start.go:469] detecting cgroup driver to use...
	I1002 10:37:01.465189  340248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 10:37:01.483069  340248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 10:37:01.495534  340248 docker.go:197] disabling cri-docker service (if available) ...
	I1002 10:37:01.495595  340248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 10:37:01.508507  340248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 10:37:01.523731  340248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 10:37:01.638112  340248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 10:37:01.748469  340248 docker.go:213] disabling docker service ...
	I1002 10:37:01.748572  340248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 10:37:01.762898  340248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 10:37:01.774174  340248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 10:37:01.878317  340248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 10:37:01.981185  340248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 10:37:01.993421  340248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:37:02.010823  340248 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 10:37:02.010897  340248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 10:37:02.019681  340248 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 10:37:02.019742  340248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 10:37:02.028445  340248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 10:37:02.037150  340248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 10:37:02.045711  340248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 10:37:02.054668  340248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 10:37:02.062484  340248 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 10:37:02.062533  340248 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 10:37:02.075193  340248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 10:37:02.083381  340248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:37:02.183400  340248 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 10:37:02.368496  340248 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 10:37:02.368593  340248 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 10:37:02.373380  340248 start.go:537] Will wait 60s for crictl version
	I1002 10:37:02.373451  340248 ssh_runner.go:195] Run: which crictl
	I1002 10:37:02.377391  340248 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 10:37:02.420015  340248 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 10:37:02.420111  340248 ssh_runner.go:195] Run: crio --version
	I1002 10:37:02.462244  340248 ssh_runner.go:195] Run: crio --version
	I1002 10:37:02.511075  340248 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 10:37:02.512704  340248 main.go:141] libmachine: (addons-304007) Calling .GetIP
	I1002 10:37:02.515242  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:02.515618  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:02.515643  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:02.515859  340248 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 10:37:02.519956  340248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 10:37:02.532240  340248 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 10:37:02.532300  340248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 10:37:02.563832  340248 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 10:37:02.563898  340248 ssh_runner.go:195] Run: which lz4
	I1002 10:37:02.567884  340248 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1002 10:37:02.572023  340248 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 10:37:02.572055  340248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1002 10:37:04.336059  340248 crio.go:444] Took 1.768200 seconds to copy over tarball
	I1002 10:37:04.336131  340248 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 10:37:07.240311  340248 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.904149781s)
	I1002 10:37:07.240352  340248 crio.go:451] Took 2.904254 seconds to extract the tarball
	I1002 10:37:07.240364  340248 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 10:37:07.280909  340248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 10:37:07.342283  340248 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 10:37:07.342307  340248 cache_images.go:84] Images are preloaded, skipping loading
	I1002 10:37:07.342376  340248 ssh_runner.go:195] Run: crio config
	I1002 10:37:07.398254  340248 cni.go:84] Creating CNI manager for ""
	I1002 10:37:07.398275  340248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 10:37:07.398319  340248 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 10:37:07.398348  340248 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.235 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-304007 NodeName:addons-304007 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 10:37:07.398527  340248 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-304007"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 10:37:07.398639  340248 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-304007 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-304007 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 10:37:07.398719  340248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 10:37:07.407450  340248 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 10:37:07.407517  340248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 10:37:07.415701  340248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1002 10:37:07.429926  340248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 10:37:07.445230  340248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1002 10:37:07.460588  340248 ssh_runner.go:195] Run: grep 192.168.39.235	control-plane.minikube.internal$ /etc/hosts
	I1002 10:37:07.464163  340248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 10:37:07.476056  340248 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007 for IP: 192.168.39.235
	I1002 10:37:07.476091  340248 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:37:07.476270  340248 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 10:37:07.546687  340248 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt ...
	I1002 10:37:07.546722  340248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt: {Name:mk737228f7ea0d50935d40975263f6a83e9e6fc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:37:07.546913  340248 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key ...
	I1002 10:37:07.546928  340248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key: {Name:mk5a8c77798e947c0d6a21e5744371cd54a4d369 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:37:07.547030  340248 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 10:37:07.596044  340248 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt ...
	I1002 10:37:07.596080  340248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt: {Name:mkc690265ec677bdf6901042d8bd2a953780d4f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:37:07.596256  340248 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key ...
	I1002 10:37:07.596271  340248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key: {Name:mkd46e03b92c10f1d8c39de47fe21fa32306639b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:37:07.596406  340248 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.key
	I1002 10:37:07.596428  340248 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt with IP's: []
	I1002 10:37:07.683544  340248 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt ...
	I1002 10:37:07.683580  340248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: {Name:mke710d8e464b0ad4f03898ec225fb8550601824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:37:07.683767  340248 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.key ...
	I1002 10:37:07.683784  340248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.key: {Name:mk46d7ed4f0bf1c156732e7c8afdcc0e2f5f5172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:37:07.683882  340248 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/apiserver.key.df2bd094
	I1002 10:37:07.683905  340248 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/apiserver.crt.df2bd094 with IP's: [192.168.39.235 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 10:37:07.899727  340248 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/apiserver.crt.df2bd094 ...
	I1002 10:37:07.899760  340248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/apiserver.crt.df2bd094: {Name:mkd6a464e8d240d4ac428eeb829d5e30342b5314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:37:07.899955  340248 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/apiserver.key.df2bd094 ...
	I1002 10:37:07.899972  340248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/apiserver.key.df2bd094: {Name:mkef273af99078d6ade9368fc87681c56eb8d82d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:37:07.900070  340248 certs.go:337] copying /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/apiserver.crt.df2bd094 -> /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/apiserver.crt
	I1002 10:37:07.900153  340248 certs.go:341] copying /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/apiserver.key.df2bd094 -> /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/apiserver.key
	I1002 10:37:07.900196  340248 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/proxy-client.key
	I1002 10:37:07.900214  340248 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/proxy-client.crt with IP's: []
	I1002 10:37:08.038315  340248 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/proxy-client.crt ...
	I1002 10:37:08.038348  340248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/proxy-client.crt: {Name:mkab4e90f32b56a3427628c2204260e2b4e8ff8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:37:08.038541  340248 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/proxy-client.key ...
	I1002 10:37:08.038556  340248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/proxy-client.key: {Name:mk9cd368b8aa5260276de7aab7712c12d7444abb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:37:08.038896  340248 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 10:37:08.038946  340248 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 10:37:08.038974  340248 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 10:37:08.039011  340248 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 10:37:08.039735  340248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 10:37:08.067427  340248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 10:37:08.094021  340248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 10:37:08.119639  340248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 10:37:08.145942  340248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 10:37:08.171631  340248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 10:37:08.197476  340248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 10:37:08.223240  340248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 10:37:08.249215  340248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 10:37:08.274759  340248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 10:37:08.292372  340248 ssh_runner.go:195] Run: openssl version
	I1002 10:37:08.297800  340248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 10:37:08.307179  340248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:37:08.312060  340248 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:37:08.312123  340248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:37:08.317642  340248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 10:37:08.327581  340248 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 10:37:08.332109  340248 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 10:37:08.332160  340248 kubeadm.go:404] StartCluster: {Name:addons-304007 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.2 ClusterName:addons-304007 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:37:08.332249  340248 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 10:37:08.332321  340248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 10:37:08.382926  340248 cri.go:89] found id: ""
	I1002 10:37:08.382999  340248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 10:37:08.393361  340248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 10:37:08.402615  340248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 10:37:08.411560  340248 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 10:37:08.411605  340248 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 10:37:08.469900  340248 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 10:37:08.470020  340248 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 10:37:08.633835  340248 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 10:37:08.634004  340248 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 10:37:08.634174  340248 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 10:37:08.847120  340248 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 10:37:08.952668  340248 out.go:204]   - Generating certificates and keys ...
	I1002 10:37:08.952877  340248 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 10:37:08.952984  340248 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 10:37:08.953086  340248 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 10:37:09.203549  340248 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1002 10:37:09.415991  340248 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1002 10:37:09.641476  340248 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1002 10:37:09.850221  340248 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1002 10:37:09.850393  340248 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-304007 localhost] and IPs [192.168.39.235 127.0.0.1 ::1]
	I1002 10:37:10.233090  340248 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1002 10:37:10.233375  340248 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-304007 localhost] and IPs [192.168.39.235 127.0.0.1 ::1]
	I1002 10:37:10.653135  340248 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 10:37:11.024186  340248 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 10:37:11.458693  340248 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1002 10:37:11.458940  340248 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 10:37:11.574032  340248 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 10:37:11.789928  340248 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 10:37:11.923807  340248 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 10:37:12.097218  340248 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 10:37:12.097622  340248 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 10:37:12.100115  340248 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 10:37:12.102051  340248 out.go:204]   - Booting up control plane ...
	I1002 10:37:12.102195  340248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 10:37:12.102337  340248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 10:37:12.102618  340248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 10:37:12.117946  340248 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 10:37:12.118693  340248 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 10:37:12.118761  340248 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 10:37:12.239367  340248 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 10:37:19.740607  340248 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502348 seconds
	I1002 10:37:19.740771  340248 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 10:37:19.759274  340248 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 10:37:20.289675  340248 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 10:37:20.289896  340248 kubeadm.go:322] [mark-control-plane] Marking the node addons-304007 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 10:37:20.804494  340248 kubeadm.go:322] [bootstrap-token] Using token: z98caj.cjxtrmdishs0e76w
	I1002 10:37:20.806247  340248 out.go:204]   - Configuring RBAC rules ...
	I1002 10:37:20.806412  340248 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 10:37:20.811482  340248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 10:37:20.824241  340248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 10:37:20.827885  340248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 10:37:20.831190  340248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 10:37:20.839972  340248 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 10:37:20.858447  340248 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 10:37:21.106951  340248 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 10:37:21.217876  340248 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 10:37:21.217898  340248 kubeadm.go:322] 
	I1002 10:37:21.218005  340248 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 10:37:21.218025  340248 kubeadm.go:322] 
	I1002 10:37:21.218118  340248 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 10:37:21.218131  340248 kubeadm.go:322] 
	I1002 10:37:21.218151  340248 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 10:37:21.218201  340248 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 10:37:21.218297  340248 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 10:37:21.218323  340248 kubeadm.go:322] 
	I1002 10:37:21.218424  340248 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 10:37:21.218436  340248 kubeadm.go:322] 
	I1002 10:37:21.218518  340248 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 10:37:21.218529  340248 kubeadm.go:322] 
	I1002 10:37:21.218600  340248 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 10:37:21.218696  340248 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 10:37:21.218794  340248 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 10:37:21.218803  340248 kubeadm.go:322] 
	I1002 10:37:21.218869  340248 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 10:37:21.218976  340248 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 10:37:21.218993  340248 kubeadm.go:322] 
	I1002 10:37:21.219100  340248 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token z98caj.cjxtrmdishs0e76w \
	I1002 10:37:21.219210  340248 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 10:37:21.219234  340248 kubeadm.go:322] 	--control-plane 
	I1002 10:37:21.219239  340248 kubeadm.go:322] 
	I1002 10:37:21.219348  340248 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 10:37:21.219360  340248 kubeadm.go:322] 
	I1002 10:37:21.219458  340248 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token z98caj.cjxtrmdishs0e76w \
	I1002 10:37:21.219606  340248 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 10:37:21.219847  340248 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 10:37:21.219876  340248 cni.go:84] Creating CNI manager for ""
	I1002 10:37:21.219887  340248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 10:37:21.222463  340248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 10:37:21.224045  340248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 10:37:21.245452  340248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 10:37:21.310819  340248 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 10:37:21.310897  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:21.310961  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=addons-304007 minikube.k8s.io/updated_at=2023_10_02T10_37_21_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:21.468169  340248 ops.go:34] apiserver oom_adj: -16
	I1002 10:37:21.468524  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:21.622463  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:22.219974  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:22.720107  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:23.219349  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:23.719325  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:24.219400  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:24.719528  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:25.220159  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:25.719394  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:26.219990  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:26.719536  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:27.220258  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:27.719718  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:28.219816  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:28.720062  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:29.220197  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:29.719608  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:30.219982  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:30.719562  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:31.220357  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:31.719711  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:32.219396  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:32.719592  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:33.219398  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:33.719606  340248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:37:33.925504  340248 kubeadm.go:1081] duration metric: took 12.614654521s to wait for elevateKubeSystemPrivileges.
	I1002 10:37:33.925539  340248 kubeadm.go:406] StartCluster complete in 25.593384686s
	I1002 10:37:33.925560  340248 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:37:33.925676  340248 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 10:37:33.926036  340248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:37:33.926263  340248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 10:37:33.926321  340248 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1002 10:37:33.926453  340248 addons.go:69] Setting volumesnapshots=true in profile "addons-304007"
	I1002 10:37:33.926463  340248 addons.go:69] Setting gcp-auth=true in profile "addons-304007"
	I1002 10:37:33.926478  340248 addons.go:231] Setting addon volumesnapshots=true in "addons-304007"
	I1002 10:37:33.926489  340248 mustload.go:65] Loading cluster: addons-304007
	I1002 10:37:33.926501  340248 config.go:182] Loaded profile config "addons-304007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 10:37:33.926526  340248 addons.go:69] Setting cloud-spanner=true in profile "addons-304007"
	I1002 10:37:33.926541  340248 addons.go:69] Setting helm-tiller=true in profile "addons-304007"
	I1002 10:37:33.926546  340248 addons.go:231] Setting addon cloud-spanner=true in "addons-304007"
	I1002 10:37:33.926551  340248 addons.go:231] Setting addon helm-tiller=true in "addons-304007"
	I1002 10:37:33.926617  340248 addons.go:69] Setting inspektor-gadget=true in profile "addons-304007"
	I1002 10:37:33.926638  340248 addons.go:231] Setting addon inspektor-gadget=true in "addons-304007"
	I1002 10:37:33.926644  340248 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-304007"
	I1002 10:37:33.926684  340248 host.go:66] Checking if "addons-304007" exists ...
	I1002 10:37:33.926683  340248 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-304007"
	I1002 10:37:33.926702  340248 config.go:182] Loaded profile config "addons-304007": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 10:37:33.926531  340248 host.go:66] Checking if "addons-304007" exists ...
	I1002 10:37:33.926571  340248 addons.go:69] Setting default-storageclass=true in profile "addons-304007"
	I1002 10:37:33.926839  340248 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-304007"
	I1002 10:37:33.927101  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.927106  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.927111  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.926506  340248 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-304007"
	I1002 10:37:33.927139  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.927147  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.927180  340248 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-304007"
	I1002 10:37:33.926595  340248 host.go:66] Checking if "addons-304007" exists ...
	I1002 10:37:33.926586  340248 addons.go:69] Setting metrics-server=true in profile "addons-304007"
	I1002 10:37:33.927242  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.927243  340248 addons.go:231] Setting addon metrics-server=true in "addons-304007"
	I1002 10:37:33.927261  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.927298  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.926607  340248 addons.go:69] Setting ingress-dns=true in profile "addons-304007"
	I1002 10:37:33.927341  340248 addons.go:231] Setting addon ingress-dns=true in "addons-304007"
	I1002 10:37:33.926606  340248 addons.go:69] Setting registry=true in profile "addons-304007"
	I1002 10:37:33.927356  340248 addons.go:231] Setting addon registry=true in "addons-304007"
	I1002 10:37:33.926605  340248 addons.go:69] Setting storage-provisioner=true in profile "addons-304007"
	I1002 10:37:33.927368  340248 addons.go:231] Setting addon storage-provisioner=true in "addons-304007"
	I1002 10:37:33.926687  340248 host.go:66] Checking if "addons-304007" exists ...
	I1002 10:37:33.927515  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.927564  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.926598  340248 addons.go:69] Setting ingress=true in profile "addons-304007"
	I1002 10:37:33.927613  340248 addons.go:231] Setting addon ingress=true in "addons-304007"
	I1002 10:37:33.927659  340248 host.go:66] Checking if "addons-304007" exists ...
	I1002 10:37:33.927688  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.927718  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.927223  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.927770  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.927795  340248 host.go:66] Checking if "addons-304007" exists ...
	I1002 10:37:33.927831  340248 host.go:66] Checking if "addons-304007" exists ...
	I1002 10:37:33.928005  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.928032  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.928083  340248 host.go:66] Checking if "addons-304007" exists ...
	I1002 10:37:33.928152  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.928176  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.928186  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.928225  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.928334  340248 host.go:66] Checking if "addons-304007" exists ...
	I1002 10:37:33.928394  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.928420  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.928638  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.928671  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.928711  340248 host.go:66] Checking if "addons-304007" exists ...
	I1002 10:37:33.947703  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40917
	I1002 10:37:33.947716  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I1002 10:37:33.947703  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34239
	I1002 10:37:33.948382  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:33.948413  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:33.948475  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:33.948958  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:33.948983  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:33.949123  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:33.949142  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:33.949261  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:33.949278  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:33.949507  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:33.949604  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:33.950179  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.950190  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.950221  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.950334  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.950729  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:33.952295  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46639
	I1002 10:37:33.956736  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39425
	I1002 10:37:33.957117  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40369
	I1002 10:37:33.957246  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:33.957266  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40649
	I1002 10:37:33.957668  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:33.957776  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:33.957944  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:33.957959  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:33.958095  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:33.958108  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:33.958222  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:33.958237  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:33.958295  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:33.958514  340248 main.go:141] libmachine: (addons-304007) Calling .GetState
	I1002 10:37:33.958577  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:33.958579  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:33.958875  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.958916  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.959165  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:33.959174  340248 main.go:141] libmachine: (addons-304007) Calling .GetState
	I1002 10:37:33.959372  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.959415  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.959585  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.959616  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.962758  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:33.962784  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:33.962855  340248 host.go:66] Checking if "addons-304007" exists ...
	I1002 10:37:33.963230  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.963272  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.964462  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:33.964666  340248 main.go:141] libmachine: (addons-304007) Calling .GetState
	I1002 10:37:33.965564  340248 addons.go:231] Setting addon default-storageclass=true in "addons-304007"
	I1002 10:37:33.965605  340248 host.go:66] Checking if "addons-304007" exists ...
	I1002 10:37:33.966038  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.966070  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.967379  340248 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-304007"
	I1002 10:37:33.967428  340248 host.go:66] Checking if "addons-304007" exists ...
	I1002 10:37:33.967795  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.967833  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.988162  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I1002 10:37:33.988805  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:33.989420  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:33.989442  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:33.990053  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:33.990662  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.990704  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.992402  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40373
	I1002 10:37:33.992662  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43015
	I1002 10:37:33.992772  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44675
	I1002 10:37:33.992937  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41281
	I1002 10:37:33.993073  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:33.993124  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:33.993614  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:33.993643  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:33.993655  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:33.994101  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:33.994244  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:33.994257  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:33.994538  340248 main.go:141] libmachine: (addons-304007) Calling .GetState
	I1002 10:37:33.994641  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:33.994728  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:33.995242  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:33.995281  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:33.995989  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:33.996007  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:33.996458  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:33.996647  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:33.996710  340248 main.go:141] libmachine: (addons-304007) Calling .GetState
	I1002 10:37:33.999317  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:33.999378  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:33.999325  340248 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1002 10:37:34.001079  340248 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1002 10:37:34.001102  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1002 10:37:34.001123  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:33.999133  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I1002 10:37:33.999770  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:33.999971  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:34.003001  340248 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I1002 10:37:34.001779  340248 main.go:141] libmachine: (addons-304007) Calling .GetState
	I1002 10:37:34.002068  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:34.004465  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.004472  340248 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1002 10:37:34.004489  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 10:37:34.004512  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:34.004719  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43651
	I1002 10:37:34.004929  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:34.004955  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:34.005298  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:34.005631  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:34.005869  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:34.005906  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:34.006144  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.006775  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37081
	I1002 10:37:34.007106  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:34.007113  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:34.008957  340248 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.0
	I1002 10:37:34.007785  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:34.008078  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:34.008548  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:34.008625  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41057
	I1002 10:37:34.009092  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.009730  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:34.013282  340248 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1002 10:37:34.010170  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:34.010332  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:34.010916  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:34.010935  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:34.010957  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39965
	I1002 10:37:34.011002  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:34.011330  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:34.012363  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45331
	I1002 10:37:34.013135  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33047
	I1002 10:37:34.013572  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40811
	I1002 10:37:34.016375  340248 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1002 10:37:34.014712  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.014742  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:34.015102  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:34.015113  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:34.015145  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:34.015183  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:34.015238  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:34.015396  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:34.015404  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:34.015717  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:34.018279  340248 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 10:37:34.018302  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I1002 10:37:34.018323  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:34.018461  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:34.018483  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:34.018485  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:34.018587  340248 main.go:141] libmachine: (addons-304007) Calling .GetState
	I1002 10:37:34.019742  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:34.019789  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:34.019857  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:34.019861  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:34.019907  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:34.019921  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:34.019921  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:34.019934  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:34.019742  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:34.020477  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:34.020482  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:34.020555  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:34.020598  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:34.020851  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:34.020912  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:34.021035  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:34.021074  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:34.021168  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:34.021211  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:34.021169  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:34.021281  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:34.023118  340248 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1002 10:37:34.022124  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:34.023204  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.023811  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:34.024354  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:34.024397  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:34.024422  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.024454  340248 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1002 10:37:34.024474  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1002 10:37:34.024497  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:34.024830  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:34.024909  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:34.025088  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:34.025266  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:34.026443  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43871
	I1002 10:37:34.026685  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:34.026725  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:34.026947  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:34.027380  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:34.027399  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:34.027768  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:34.027937  340248 main.go:141] libmachine: (addons-304007) Calling .GetState
	I1002 10:37:34.028445  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.029194  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:34.029222  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.029721  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:34.029944  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:34.030013  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:34.031974  340248 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 10:37:34.030432  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:34.032586  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42255
	I1002 10:37:34.033561  340248 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 10:37:34.033574  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 10:37:34.033593  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:34.033937  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:34.034825  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:34.035369  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:34.035386  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:34.036309  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:34.037133  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.037608  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:34.037634  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.037796  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:34.037912  340248 main.go:141] libmachine: (addons-304007) Calling .GetState
	I1002 10:37:34.037985  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:34.038093  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:34.038245  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:34.039873  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:34.040166  340248 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 10:37:34.040183  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 10:37:34.040199  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:34.042670  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I1002 10:37:34.043185  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:34.044115  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:34.044136  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:34.044240  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.044334  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:34.044354  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.044633  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:34.044822  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:34.044884  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38811
	I1002 10:37:34.045036  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:34.045139  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:34.045264  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:34.045295  340248 main.go:141] libmachine: (addons-304007) Calling .GetState
	I1002 10:37:34.045361  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:34.045757  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:34.045770  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:34.046042  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:34.046163  340248 main.go:141] libmachine: (addons-304007) Calling .GetState
	I1002 10:37:34.047593  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:34.049699  340248 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1002 10:37:34.048161  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:34.049234  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35025
	I1002 10:37:34.051251  340248 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 10:37:34.051271  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1002 10:37:34.051290  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:34.051570  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:34.053113  340248 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1002 10:37:34.051816  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44931
	I1002 10:37:34.052730  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:34.054414  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:34.054510  340248 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 10:37:34.054519  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 10:37:34.054530  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:34.055099  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:34.055464  340248 main.go:141] libmachine: (addons-304007) Calling .GetState
	I1002 10:37:34.055529  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.055573  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:34.056338  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:34.056361  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:34.056876  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:34.056900  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.056933  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:34.057077  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:34.057237  340248 main.go:141] libmachine: (addons-304007) Calling .GetState
	I1002 10:37:34.057302  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:34.057493  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:34.057667  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:34.058172  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.058462  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:34.058534  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:34.058551  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.058594  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:34.060255  340248 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 10:37:34.058949  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:34.059968  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:34.061016  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
	I1002 10:37:34.061954  340248 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 10:37:34.061964  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 10:37:34.061978  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:34.062250  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:34.063985  340248 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 10:37:34.062592  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:34.062839  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:34.064977  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.065861  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:34.065877  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.067144  340248 out.go:177]   - Using image docker.io/busybox:stable
	I1002 10:37:34.065679  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:34.066542  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:34.068594  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:34.068722  340248 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 10:37:34.068741  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 10:37:34.068761  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:34.068826  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:34.068972  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:34.069024  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:34.069233  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:34.069339  340248 main.go:141] libmachine: (addons-304007) Calling .GetState
	I1002 10:37:34.071206  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:34.072858  340248 out.go:177]   - Using image docker.io/registry:2.8.1
	I1002 10:37:34.071809  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I1002 10:37:34.074219  340248 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1002 10:37:34.072036  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.072621  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:34.073214  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:34.074301  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:34.075493  340248 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 10:37:34.075504  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.075507  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1002 10:37:34.074528  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:34.075525  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:34.074770  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:34.075549  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:34.076105  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:34.076119  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:34.076253  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:34.076487  340248 main.go:141] libmachine: (addons-304007) Calling .GetState
	I1002 10:37:34.078310  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:34.078824  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.080343  340248 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 10:37:34.079386  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:34.079589  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:34.082925  340248 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 10:37:34.081708  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.081858  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:34.085387  340248 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 10:37:34.084324  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:34.085587  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:34.086795  340248 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 10:37:34.088226  340248 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 10:37:34.089598  340248 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 10:37:34.090848  340248 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 10:37:34.092081  340248 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 10:37:34.093205  340248 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 10:37:34.093224  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 10:37:34.093244  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:34.096113  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.096478  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:34.096513  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:34.096637  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:34.096828  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:34.096978  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:34.097118  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:34.250876  340248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 10:37:34.269503  340248 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1002 10:37:34.269522  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1002 10:37:34.281853  340248 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1002 10:37:34.281877  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1002 10:37:34.293327  340248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 10:37:34.308517  340248 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 10:37:34.308539  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 10:37:34.333269  340248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 10:37:34.411065  340248 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 10:37:34.411092  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 10:37:34.426704  340248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 10:37:34.462271  340248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 10:37:34.463266  340248 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1002 10:37:34.463290  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1002 10:37:34.470381  340248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 10:37:34.470999  340248 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 10:37:34.471016  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 10:37:34.471620  340248 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 10:37:34.471633  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 10:37:34.476106  340248 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1002 10:37:34.476125  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1002 10:37:34.479333  340248 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 10:37:34.479347  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	W1002 10:37:34.485900  340248 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-304007" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1002 10:37:34.485922  340248 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1002 10:37:34.485943  340248 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 10:37:34.487685  340248 out.go:177] * Verifying Kubernetes components...
	I1002 10:37:34.488946  340248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:37:34.523441  340248 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 10:37:34.523462  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 10:37:34.585742  340248 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1002 10:37:34.585768  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1002 10:37:34.616903  340248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 10:37:34.623792  340248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1002 10:37:34.685640  340248 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 10:37:34.685665  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 10:37:34.692389  340248 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 10:37:34.692411  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 10:37:34.699386  340248 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 10:37:34.699404  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 10:37:34.736447  340248 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 10:37:34.736488  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 10:37:34.762794  340248 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1002 10:37:34.762820  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1002 10:37:34.833035  340248 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 10:37:34.833069  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 10:37:34.838211  340248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 10:37:34.843323  340248 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 10:37:34.843343  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 10:37:34.881283  340248 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1002 10:37:34.881308  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1002 10:37:34.893323  340248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 10:37:34.939995  340248 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 10:37:34.940019  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 10:37:34.983696  340248 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 10:37:34.983722  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 10:37:34.995833  340248 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 10:37:34.995859  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1002 10:37:35.021140  340248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 10:37:35.074134  340248 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 10:37:35.074166  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 10:37:35.113056  340248 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1002 10:37:35.113081  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1002 10:37:35.205366  340248 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 10:37:35.205399  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 10:37:35.214857  340248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1002 10:37:35.331287  340248 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 10:37:35.331313  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 10:37:35.387315  340248 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 10:37:35.387338  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 10:37:35.439080  340248 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 10:37:35.439106  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 10:37:35.474183  340248 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 10:37:35.474213  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 10:37:35.512525  340248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 10:37:39.149348  340248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.898428019s)
	I1002 10:37:39.149410  340248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.856047716s)
	I1002 10:37:39.149453  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:39.149475  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:39.149416  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:39.149538  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:39.149727  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:39.149753  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:39.149773  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:39.149782  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:39.151720  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:39.151739  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:39.151753  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:39.151752  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:39.151731  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:39.151791  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:39.151804  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:39.151818  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:39.152034  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:39.152049  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:39.152051  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:39.355067  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:39.355089  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:39.355427  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:39.355449  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:39.355472  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:41.020510  340248 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 10:37:41.020550  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:41.024308  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:41.024805  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:41.024844  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:41.025078  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:41.025323  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:41.025575  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:41.025729  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:41.239443  340248 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 10:37:41.353382  340248 addons.go:231] Setting addon gcp-auth=true in "addons-304007"
	I1002 10:37:41.353449  340248 host.go:66] Checking if "addons-304007" exists ...
	I1002 10:37:41.353803  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:41.353861  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:41.369078  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33885
	I1002 10:37:41.369610  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:41.370135  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:41.370154  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:41.370510  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:41.371057  340248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:37:41.371096  340248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:37:41.386378  340248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33751
	I1002 10:37:41.386901  340248 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:37:41.387375  340248 main.go:141] libmachine: Using API Version  1
	I1002 10:37:41.387406  340248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:37:41.387827  340248 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:37:41.388083  340248 main.go:141] libmachine: (addons-304007) Calling .GetState
	I1002 10:37:41.389974  340248 main.go:141] libmachine: (addons-304007) Calling .DriverName
	I1002 10:37:41.390224  340248 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 10:37:41.390245  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHHostname
	I1002 10:37:41.393358  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:41.393810  340248 main.go:141] libmachine: (addons-304007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:b6:9e", ip: ""} in network mk-addons-304007: {Iface:virbr1 ExpiryTime:2023-10-02 11:36:48 +0000 UTC Type:0 Mac:52:54:00:49:b6:9e Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:addons-304007 Clientid:01:52:54:00:49:b6:9e}
	I1002 10:37:41.393845  340248 main.go:141] libmachine: (addons-304007) DBG | domain addons-304007 has defined IP address 192.168.39.235 and MAC address 52:54:00:49:b6:9e in network mk-addons-304007
	I1002 10:37:41.393970  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHPort
	I1002 10:37:41.394190  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHKeyPath
	I1002 10:37:41.394338  340248 main.go:141] libmachine: (addons-304007) Calling .GetSSHUsername
	I1002 10:37:41.394473  340248 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/addons-304007/id_rsa Username:docker}
	I1002 10:37:42.780323  340248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.447008351s)
	I1002 10:37:42.780391  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:42.780406  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:42.780464  340248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.35372441s)
	I1002 10:37:42.780522  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:42.780558  340248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.318242143s)
	I1002 10:37:42.780598  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:42.780572  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:42.780636  340248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.310226903s)
	I1002 10:37:42.780658  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:42.780613  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:42.780673  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:42.780924  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:42.780940  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:42.781002  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:42.780957  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:42.781016  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:42.780966  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:42.781026  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:42.781038  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:42.781054  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:42.781068  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:42.781006  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:42.781125  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:42.781145  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:42.781178  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:42.781213  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:42.781221  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:42.781229  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:42.781237  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:42.781343  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:42.781389  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:42.781416  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:42.781427  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:42.781440  340248 addons.go:467] Verifying addon ingress=true in "addons-304007"
	I1002 10:37:42.785008  340248 out.go:177] * Verifying ingress addon...
	I1002 10:37:42.781039  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:42.782172  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:42.783123  340248 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (8.294145382s)
	I1002 10:37:42.783169  340248 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.166237627s)
	I1002 10:37:42.783173  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:42.783201  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:42.783224  340248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.159401337s)
	I1002 10:37:42.783277  340248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.945037798s)
	I1002 10:37:42.783384  340248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.890022391s)
	I1002 10:37:42.783588  340248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.762403638s)
	I1002 10:37:42.783675  340248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.568780549s)
	I1002 10:37:42.787032  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:42.787051  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:42.787060  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:42.787117  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:42.787137  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:42.787138  340248 start.go:923] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1002 10:37:42.787157  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	W1002 10:37:42.787170  340248 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 10:37:42.787196  340248 retry.go:31] will retry after 320.276819ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 10:37:42.787176  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:42.787221  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:42.787122  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:42.787260  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:42.787613  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:42.787614  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:42.787633  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:42.787633  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:42.787642  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:42.787644  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:42.787647  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:42.787655  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:42.787659  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:42.787664  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:42.787670  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:42.787675  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:42.787683  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:42.787656  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:42.788024  340248 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 10:37:42.788033  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:42.788049  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:42.788069  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:42.788074  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:42.788228  340248 node_ready.go:35] waiting up to 6m0s for node "addons-304007" to be "Ready" ...
	I1002 10:37:42.788251  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:42.788264  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:42.788274  340248 addons.go:467] Verifying addon registry=true in "addons-304007"
	I1002 10:37:42.790161  340248 out.go:177] * Verifying registry addon...
	I1002 10:37:42.787675  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:42.790251  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:42.791369  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:42.791435  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:42.791910  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:42.791923  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:42.791932  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:42.792223  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:42.792220  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:42.792237  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:42.792238  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:42.792692  340248 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 10:37:42.793926  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:42.793940  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:42.793958  340248 addons.go:467] Verifying addon metrics-server=true in "addons-304007"
	I1002 10:37:42.802198  340248 node_ready.go:49] node "addons-304007" has status "Ready":"True"
	I1002 10:37:42.802220  340248 node_ready.go:38] duration metric: took 13.974013ms waiting for node "addons-304007" to be "Ready" ...
	I1002 10:37:42.802231  340248 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:37:42.817205  340248 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 10:37:42.817233  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:42.834431  340248 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 10:37:42.834457  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:42.855604  340248 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g896d" in "kube-system" namespace to be "Ready" ...
	I1002 10:37:42.940523  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:42.951648  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:42.955034  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:42.955068  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:42.955359  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:42.955383  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:43.108388  340248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 10:37:43.355699  340248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.843112831s)
	I1002 10:37:43.355762  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:43.355783  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:43.355799  340248 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.965550678s)
	I1002 10:37:43.357542  340248 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1002 10:37:43.356090  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:43.356114  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:43.358858  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:43.358875  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:43.358888  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:43.360197  340248 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1002 10:37:43.359185  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:43.360272  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:43.360290  340248 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-304007"
	I1002 10:37:43.359216  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:43.361774  340248 out.go:177] * Verifying csi-hostpath-driver addon...
	I1002 10:37:43.361691  340248 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 10:37:43.363125  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 10:37:43.364092  340248 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 10:37:43.394573  340248 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 10:37:43.394604  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 10:37:43.432707  340248 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 10:37:43.432740  340248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I1002 10:37:43.467655  340248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 10:37:43.477454  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:43.477645  340248 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 10:37:43.477667  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:43.557889  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:43.560732  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:43.958644  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:43.977639  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:44.087052  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:44.483665  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:44.512582  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:44.570767  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:44.995069  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:44.995851  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:45.036942  340248 pod_ready.go:102] pod "coredns-5dd5756b68-g896d" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:45.100916  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:45.459012  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:45.477279  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:45.564105  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:45.858171  340248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.74972561s)
	I1002 10:37:45.858250  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:45.858272  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:45.858652  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:45.858678  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:45.858689  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:45.858695  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:45.858744  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:45.859010  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:45.859059  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:45.960864  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:45.966946  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:46.110449  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:46.376207  340248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.908500252s)
	I1002 10:37:46.376274  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:46.376290  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:46.376673  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:46.376700  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:46.376711  340248 main.go:141] libmachine: Making call to close driver server
	I1002 10:37:46.376720  340248 main.go:141] libmachine: (addons-304007) Calling .Close
	I1002 10:37:46.376722  340248 main.go:141] libmachine: (addons-304007) DBG | Closing plugin on server side
	I1002 10:37:46.376983  340248 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:37:46.377005  340248 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:37:46.378724  340248 addons.go:467] Verifying addon gcp-auth=true in "addons-304007"
	I1002 10:37:46.380489  340248 out.go:177] * Verifying gcp-auth addon...
	I1002 10:37:46.383152  340248 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 10:37:46.407715  340248 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 10:37:46.407738  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:46.459862  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:46.479450  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:46.479663  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:46.572896  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:46.947735  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:46.956839  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:46.965954  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:47.064820  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:47.447396  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:47.458363  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:47.466760  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:47.499471  340248 pod_ready.go:102] pod "coredns-5dd5756b68-g896d" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:47.565780  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:47.946810  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:47.956459  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:47.964227  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:48.063733  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:48.445269  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:48.456884  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:48.466851  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:48.565959  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:48.946189  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:48.957101  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:48.962946  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:49.063918  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:49.446393  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:49.456951  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:49.463475  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:49.586610  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:49.946440  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:49.957724  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:49.963944  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:50.005089  340248 pod_ready.go:102] pod "coredns-5dd5756b68-g896d" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:50.071872  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:50.451199  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:50.468586  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:50.468821  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:50.567154  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:50.959599  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:50.967246  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:50.967882  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:51.069996  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:51.451869  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:51.456845  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:51.464277  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:51.577216  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:51.945070  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:51.956969  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:51.963941  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:52.063402  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:52.445354  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:52.461903  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:52.472541  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:52.495877  340248 pod_ready.go:102] pod "coredns-5dd5756b68-g896d" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:52.564052  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:52.949099  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:52.957154  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:52.969258  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:53.065415  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:53.488063  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:53.488474  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:53.489073  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:53.581359  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:53.945268  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:53.973044  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:54.001623  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:54.068018  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:54.463256  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:54.467275  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:54.468451  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:54.496573  340248 pod_ready.go:102] pod "coredns-5dd5756b68-g896d" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:54.572295  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:54.945890  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:54.957620  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:54.963551  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:55.064412  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:55.445134  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:55.459527  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:55.463433  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:55.564256  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:55.946622  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:55.963633  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:55.968000  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:56.064936  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:56.447877  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:56.458364  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:56.463984  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:56.498703  340248 pod_ready.go:102] pod "coredns-5dd5756b68-g896d" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:56.566515  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:56.962372  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:56.973888  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:56.978692  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:57.064904  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:57.449090  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:57.461997  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:57.464971  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:57.566792  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:57.945959  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:57.957117  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:57.966634  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:58.064942  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:58.447533  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:58.459681  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:58.463310  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:58.564803  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:58.945057  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:58.957698  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:58.965391  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:58.999234  340248 pod_ready.go:102] pod "coredns-5dd5756b68-g896d" in "kube-system" namespace has status "Ready":"False"
	I1002 10:37:59.064749  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:59.446148  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:59.459188  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:59.465910  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:37:59.568179  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:37:59.947212  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:37:59.958299  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:37:59.964060  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:00.067080  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:00.445328  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:00.457444  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:00.462826  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:00.565401  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:00.946600  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:00.957409  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:00.973270  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:01.001259  340248 pod_ready.go:102] pod "coredns-5dd5756b68-g896d" in "kube-system" namespace has status "Ready":"False"
	I1002 10:38:01.064613  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:01.452136  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:01.471509  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:01.477665  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:01.581440  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:01.947144  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:01.958238  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:01.982439  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:02.064250  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:02.445405  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:02.457583  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:02.463072  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:02.565067  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:02.948774  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:02.959484  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:02.964209  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:03.066492  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:03.445801  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:03.458008  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:03.463841  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:03.496897  340248 pod_ready.go:102] pod "coredns-5dd5756b68-g896d" in "kube-system" namespace has status "Ready":"False"
	I1002 10:38:03.563961  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:03.945307  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:03.957144  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:03.964325  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:04.072097  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:04.446978  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:04.458890  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:04.463616  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:04.563997  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:04.946116  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:04.957091  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:04.964798  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:05.064156  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:05.445301  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:05.457276  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:05.464453  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:05.564153  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:05.945116  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:05.956619  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:05.963300  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:05.997889  340248 pod_ready.go:102] pod "coredns-5dd5756b68-g896d" in "kube-system" namespace has status "Ready":"False"
	I1002 10:38:06.065481  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:06.445830  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:06.456401  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:06.463697  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:06.563941  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:06.947166  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:06.956448  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:06.963637  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:07.064044  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:07.461962  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:07.477330  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:07.482120  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:07.565554  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:07.945286  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:07.957172  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:07.963454  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:07.999627  340248 pod_ready.go:102] pod "coredns-5dd5756b68-g896d" in "kube-system" namespace has status "Ready":"False"
	I1002 10:38:08.065302  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:08.445527  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:08.456655  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:08.463789  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:08.566117  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:08.945548  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:08.956246  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:08.964059  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:09.063457  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:09.445641  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:09.458057  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:09.463257  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:09.583874  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:09.946827  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:09.957554  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:09.963619  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:10.065107  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:10.444920  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:10.456978  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:10.463862  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:10.497042  340248 pod_ready.go:102] pod "coredns-5dd5756b68-g896d" in "kube-system" namespace has status "Ready":"False"
	I1002 10:38:10.565057  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:10.946124  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:10.957136  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:10.964020  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:11.068610  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:11.447418  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:11.458253  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:11.464167  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:11.564248  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:11.945986  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:11.957910  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:11.964362  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:12.064591  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:12.445507  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:12.467126  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:12.467301  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:12.498577  340248 pod_ready.go:102] pod "coredns-5dd5756b68-g896d" in "kube-system" namespace has status "Ready":"False"
	I1002 10:38:12.563972  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:12.945035  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:12.957782  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:12.963596  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:13.064012  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:13.454133  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:13.456843  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:13.467593  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:13.564152  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:13.945413  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:13.956769  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:13.966538  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:14.064175  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:14.445640  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:14.456325  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:14.462956  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:14.564154  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:14.948156  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:14.956393  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:14.965176  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:14.997422  340248 pod_ready.go:102] pod "coredns-5dd5756b68-g896d" in "kube-system" namespace has status "Ready":"False"
	I1002 10:38:15.063842  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:15.445258  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:15.459834  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:15.472998  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:15.565288  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:15.945081  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:15.973170  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:15.976006  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:16.067131  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:16.447304  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:16.456624  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:16.463709  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:16.563803  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:16.945365  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:16.956528  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:16.963395  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:17.064776  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:17.840125  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:17.852693  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:17.853203  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:17.854897  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:17.893111  340248 pod_ready.go:92] pod "coredns-5dd5756b68-g896d" in "kube-system" namespace has status "Ready":"True"
	I1002 10:38:17.893135  340248 pod_ready.go:81] duration metric: took 35.037505968s waiting for pod "coredns-5dd5756b68-g896d" in "kube-system" namespace to be "Ready" ...
	I1002 10:38:17.893146  340248 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hjmvh" in "kube-system" namespace to be "Ready" ...
	I1002 10:38:17.966573  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:17.968826  340248 pod_ready.go:92] pod "coredns-5dd5756b68-hjmvh" in "kube-system" namespace has status "Ready":"True"
	I1002 10:38:17.968847  340248 pod_ready.go:81] duration metric: took 75.695272ms waiting for pod "coredns-5dd5756b68-hjmvh" in "kube-system" namespace to be "Ready" ...
	I1002 10:38:17.968856  340248 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-304007" in "kube-system" namespace to be "Ready" ...
	I1002 10:38:17.974779  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:17.975308  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:17.996087  340248 pod_ready.go:92] pod "etcd-addons-304007" in "kube-system" namespace has status "Ready":"True"
	I1002 10:38:17.996112  340248 pod_ready.go:81] duration metric: took 27.248923ms waiting for pod "etcd-addons-304007" in "kube-system" namespace to be "Ready" ...
	I1002 10:38:17.996125  340248 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-304007" in "kube-system" namespace to be "Ready" ...
	I1002 10:38:18.021733  340248 pod_ready.go:92] pod "kube-apiserver-addons-304007" in "kube-system" namespace has status "Ready":"True"
	I1002 10:38:18.021756  340248 pod_ready.go:81] duration metric: took 25.623494ms waiting for pod "kube-apiserver-addons-304007" in "kube-system" namespace to be "Ready" ...
	I1002 10:38:18.021768  340248 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-304007" in "kube-system" namespace to be "Ready" ...
	I1002 10:38:18.049553  340248 pod_ready.go:92] pod "kube-controller-manager-addons-304007" in "kube-system" namespace has status "Ready":"True"
	I1002 10:38:18.049586  340248 pod_ready.go:81] duration metric: took 27.804118ms waiting for pod "kube-controller-manager-addons-304007" in "kube-system" namespace to be "Ready" ...
	I1002 10:38:18.049602  340248 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qh2xl" in "kube-system" namespace to be "Ready" ...
	I1002 10:38:18.068722  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:18.071275  340248 pod_ready.go:92] pod "kube-proxy-qh2xl" in "kube-system" namespace has status "Ready":"True"
	I1002 10:38:18.071294  340248 pod_ready.go:81] duration metric: took 21.68522ms waiting for pod "kube-proxy-qh2xl" in "kube-system" namespace to be "Ready" ...
	I1002 10:38:18.071303  340248 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-304007" in "kube-system" namespace to be "Ready" ...
	I1002 10:38:18.445420  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:18.456705  340248 pod_ready.go:92] pod "kube-scheduler-addons-304007" in "kube-system" namespace has status "Ready":"True"
	I1002 10:38:18.456726  340248 pod_ready.go:81] duration metric: took 385.416235ms waiting for pod "kube-scheduler-addons-304007" in "kube-system" namespace to be "Ready" ...
	I1002 10:38:18.456737  340248 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-cgzmw" in "kube-system" namespace to be "Ready" ...
	I1002 10:38:18.458744  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:18.464622  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:18.577488  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:18.945716  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:18.957920  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:18.963785  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:19.064848  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:19.446264  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:19.459642  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:19.465797  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:19.566322  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:19.946728  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:19.956650  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:19.963469  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:20.064403  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:20.445136  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:20.457657  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:20.464255  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:20.563313  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:20.765994  340248 pod_ready.go:102] pod "metrics-server-7c66d45ddc-cgzmw" in "kube-system" namespace has status "Ready":"False"
	I1002 10:38:20.948473  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:20.968391  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:20.971550  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:21.073223  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:21.446791  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:21.456201  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:21.464001  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:21.566783  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:21.946034  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:21.959144  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:21.963865  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:22.063946  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:22.264569  340248 pod_ready.go:92] pod "metrics-server-7c66d45ddc-cgzmw" in "kube-system" namespace has status "Ready":"True"
	I1002 10:38:22.264589  340248 pod_ready.go:81] duration metric: took 3.807846341s waiting for pod "metrics-server-7c66d45ddc-cgzmw" in "kube-system" namespace to be "Ready" ...
	I1002 10:38:22.264608  340248 pod_ready.go:38] duration metric: took 39.46236622s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:38:22.264630  340248 api_server.go:52] waiting for apiserver process to appear ...
	I1002 10:38:22.264681  340248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 10:38:22.282250  340248 api_server.go:72] duration metric: took 47.796248943s to wait for apiserver process to appear ...
	I1002 10:38:22.282275  340248 api_server.go:88] waiting for apiserver healthz status ...
	I1002 10:38:22.282292  340248 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I1002 10:38:22.288569  340248 api_server.go:279] https://192.168.39.235:8443/healthz returned 200:
	ok
	I1002 10:38:22.290301  340248 api_server.go:141] control plane version: v1.28.2
	I1002 10:38:22.290329  340248 api_server.go:131] duration metric: took 8.047807ms to wait for apiserver health ...
	I1002 10:38:22.290338  340248 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 10:38:22.303608  340248 system_pods.go:59] 18 kube-system pods found
	I1002 10:38:22.303641  340248 system_pods.go:61] "coredns-5dd5756b68-g896d" [318da70f-2bf4-4f3e-abfc-448ce49880d6] Running
	I1002 10:38:22.303648  340248 system_pods.go:61] "coredns-5dd5756b68-hjmvh" [c59316d1-1292-4d7d-a697-011ff752d5cb] Running
	I1002 10:38:22.303655  340248 system_pods.go:61] "csi-hostpath-attacher-0" [7b32f84a-5677-4446-99a7-5680bc502c3c] Running
	I1002 10:38:22.303660  340248 system_pods.go:61] "csi-hostpath-resizer-0" [b478ddd1-5756-4803-aef9-621f7845b79a] Running
	I1002 10:38:22.303671  340248 system_pods.go:61] "csi-hostpathplugin-s5j7g" [dd845b1f-dd3a-4235-b1a3-e1009cad69d7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 10:38:22.303680  340248 system_pods.go:61] "etcd-addons-304007" [9a20e79a-3878-43dd-b46e-c2f3afbeb383] Running
	I1002 10:38:22.303688  340248 system_pods.go:61] "kube-apiserver-addons-304007" [18c26f4e-ffbb-4de3-8aa4-a8f4652b5728] Running
	I1002 10:38:22.303696  340248 system_pods.go:61] "kube-controller-manager-addons-304007" [3096527c-939b-4108-a1b7-1347b10a5a91] Running
	I1002 10:38:22.303704  340248 system_pods.go:61] "kube-ingress-dns-minikube" [db5bb398-b6a5-499a-93a1-d21f68e99dd6] Running
	I1002 10:38:22.303712  340248 system_pods.go:61] "kube-proxy-qh2xl" [3c1fd7d4-e250-4d95-99df-7d4175f54858] Running
	I1002 10:38:22.303723  340248 system_pods.go:61] "kube-scheduler-addons-304007" [c519efb7-fb6c-400d-9bb0-2204bfff31c5] Running
	I1002 10:38:22.303730  340248 system_pods.go:61] "metrics-server-7c66d45ddc-cgzmw" [c6c6e12d-2982-4aa7-9bcb-8a6224dd0772] Running
	I1002 10:38:22.303740  340248 system_pods.go:61] "registry-proxy-b2tdg" [ebe43d1f-3aef-4e43-8685-e0ac4f3285d8] Running
	I1002 10:38:22.303755  340248 system_pods.go:61] "registry-v682v" [511b1064-d462-426c-9606-a5290d7ea3e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 10:38:22.303771  340248 system_pods.go:61] "snapshot-controller-58dbcc7b99-k5q4z" [3fd6b070-6115-4501-a64b-f6238b476495] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 10:38:22.303785  340248 system_pods.go:61] "snapshot-controller-58dbcc7b99-wzrhf" [7dd2fabf-d2cd-484d-8643-189ffff274a7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 10:38:22.303793  340248 system_pods.go:61] "storage-provisioner" [ac6f7688-724c-40be-a61d-5842c7561ff5] Running
	I1002 10:38:22.303802  340248 system_pods.go:61] "tiller-deploy-7b677967b9-npbhs" [37a2928b-d3bd-4586-9c85-bfdbba5b2c4a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1002 10:38:22.303809  340248 system_pods.go:74] duration metric: took 13.465195ms to wait for pod list to return data ...
	I1002 10:38:22.303819  340248 default_sa.go:34] waiting for default service account to be created ...
	I1002 10:38:22.308404  340248 default_sa.go:45] found service account: "default"
	I1002 10:38:22.308425  340248 default_sa.go:55] duration metric: took 4.596793ms for default service account to be created ...
	I1002 10:38:22.308432  340248 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 10:38:22.318615  340248 system_pods.go:86] 18 kube-system pods found
	I1002 10:38:22.318640  340248 system_pods.go:89] "coredns-5dd5756b68-g896d" [318da70f-2bf4-4f3e-abfc-448ce49880d6] Running
	I1002 10:38:22.318648  340248 system_pods.go:89] "coredns-5dd5756b68-hjmvh" [c59316d1-1292-4d7d-a697-011ff752d5cb] Running
	I1002 10:38:22.318656  340248 system_pods.go:89] "csi-hostpath-attacher-0" [7b32f84a-5677-4446-99a7-5680bc502c3c] Running
	I1002 10:38:22.318662  340248 system_pods.go:89] "csi-hostpath-resizer-0" [b478ddd1-5756-4803-aef9-621f7845b79a] Running
	I1002 10:38:22.318673  340248 system_pods.go:89] "csi-hostpathplugin-s5j7g" [dd845b1f-dd3a-4235-b1a3-e1009cad69d7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 10:38:22.318681  340248 system_pods.go:89] "etcd-addons-304007" [9a20e79a-3878-43dd-b46e-c2f3afbeb383] Running
	I1002 10:38:22.318689  340248 system_pods.go:89] "kube-apiserver-addons-304007" [18c26f4e-ffbb-4de3-8aa4-a8f4652b5728] Running
	I1002 10:38:22.318699  340248 system_pods.go:89] "kube-controller-manager-addons-304007" [3096527c-939b-4108-a1b7-1347b10a5a91] Running
	I1002 10:38:22.318709  340248 system_pods.go:89] "kube-ingress-dns-minikube" [db5bb398-b6a5-499a-93a1-d21f68e99dd6] Running
	I1002 10:38:22.318720  340248 system_pods.go:89] "kube-proxy-qh2xl" [3c1fd7d4-e250-4d95-99df-7d4175f54858] Running
	I1002 10:38:22.318728  340248 system_pods.go:89] "kube-scheduler-addons-304007" [c519efb7-fb6c-400d-9bb0-2204bfff31c5] Running
	I1002 10:38:22.318739  340248 system_pods.go:89] "metrics-server-7c66d45ddc-cgzmw" [c6c6e12d-2982-4aa7-9bcb-8a6224dd0772] Running
	I1002 10:38:22.318749  340248 system_pods.go:89] "registry-proxy-b2tdg" [ebe43d1f-3aef-4e43-8685-e0ac4f3285d8] Running
	I1002 10:38:22.318759  340248 system_pods.go:89] "registry-v682v" [511b1064-d462-426c-9606-a5290d7ea3e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 10:38:22.318775  340248 system_pods.go:89] "snapshot-controller-58dbcc7b99-k5q4z" [3fd6b070-6115-4501-a64b-f6238b476495] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 10:38:22.318790  340248 system_pods.go:89] "snapshot-controller-58dbcc7b99-wzrhf" [7dd2fabf-d2cd-484d-8643-189ffff274a7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 10:38:22.318799  340248 system_pods.go:89] "storage-provisioner" [ac6f7688-724c-40be-a61d-5842c7561ff5] Running
	I1002 10:38:22.318810  340248 system_pods.go:89] "tiller-deploy-7b677967b9-npbhs" [37a2928b-d3bd-4586-9c85-bfdbba5b2c4a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1002 10:38:22.318823  340248 system_pods.go:126] duration metric: took 10.384167ms to wait for k8s-apps to be running ...
	I1002 10:38:22.318836  340248 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 10:38:22.318887  340248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:38:22.337298  340248 system_svc.go:56] duration metric: took 18.45525ms WaitForService to wait for kubelet.
	I1002 10:38:22.337321  340248 kubeadm.go:581] duration metric: took 47.851326949s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 10:38:22.337341  340248 node_conditions.go:102] verifying NodePressure condition ...
	I1002 10:38:22.446006  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:22.457732  340248 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 10:38:22.457761  340248 node_conditions.go:123] node cpu capacity is 2
	I1002 10:38:22.457800  340248 node_conditions.go:105] duration metric: took 120.453976ms to run NodePressure ...
	I1002 10:38:22.457816  340248 start.go:228] waiting for startup goroutines ...
	I1002 10:38:22.459677  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:22.466480  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:22.563884  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:22.946756  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:22.957405  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:22.963509  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:23.063643  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:23.445986  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:23.458376  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:23.464116  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:23.564561  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:23.946316  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:23.957064  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:23.966642  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:24.065508  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:24.445841  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:24.458417  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:24.463911  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:24.577849  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:24.945977  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:24.956580  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:24.962925  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:25.064872  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:25.450396  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:25.456283  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:25.462845  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:25.564767  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:25.945776  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:25.956825  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 10:38:25.963316  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:26.063502  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:26.445480  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:26.457774  340248 kapi.go:107] duration metric: took 43.665079205s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 10:38:26.464698  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:26.564281  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:26.949956  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:26.963652  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:27.066416  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:27.447324  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:27.464578  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:27.564337  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:27.953435  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:27.964457  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:28.063664  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:28.445517  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:28.463283  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:28.563899  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:28.945629  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:28.963491  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:29.064798  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:29.445800  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:29.464333  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:29.563745  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:29.946523  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:29.965158  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:30.064951  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:30.445995  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:30.464144  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:30.564339  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:30.945705  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:30.964789  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:31.066406  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:31.445306  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:31.464658  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:31.566099  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:31.946078  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:31.963463  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:32.064683  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:32.445437  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:32.464396  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:32.565773  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:32.946137  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:32.964493  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:33.064960  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:33.445716  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:33.465255  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:33.563902  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:33.946399  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:33.963838  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:34.070039  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:34.446198  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:34.468860  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:34.566987  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:34.946623  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:34.963617  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:35.064839  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:35.445568  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:35.463758  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:35.564202  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:35.945475  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:35.964547  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:36.064275  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:36.444878  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:36.465760  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:36.564218  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:36.945647  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:36.964125  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:37.064765  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:37.445667  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:37.463753  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:37.563768  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:37.946075  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:37.975643  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:38.070034  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:38.449137  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:38.463703  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:38.569125  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:38.946635  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:38.964103  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:39.064130  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:39.450845  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:39.466578  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:39.564805  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:39.946517  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:39.964634  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:40.064475  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:40.445998  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:40.464340  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:40.565428  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:40.947077  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:40.963782  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:41.066583  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:41.446085  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:41.464513  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:41.563880  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:41.953682  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:41.963383  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:42.097510  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:42.451325  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:42.467037  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:42.564396  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:42.952277  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:42.973970  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:43.092298  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:43.679396  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:43.682473  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:43.682585  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:43.948129  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:43.963352  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:44.066466  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:44.446721  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:44.465574  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:44.564072  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:44.945189  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:44.964570  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:45.064120  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:45.445788  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:45.463865  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:45.564231  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:45.949553  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:45.964740  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:46.063776  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:46.446347  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:46.464236  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:46.574592  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:46.944706  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:46.964721  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:47.068857  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:47.445561  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:47.464345  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:47.564161  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:47.948379  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:47.965649  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:48.065727  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:48.445796  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:48.464124  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:48.564744  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:48.946063  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:48.963572  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:49.063895  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:49.445962  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:49.464024  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:49.565381  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:49.945751  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:49.963331  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:50.063398  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:50.445720  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:50.465294  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:50.563665  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:50.945683  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:50.965098  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:51.064159  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:51.445719  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:51.464119  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:51.566196  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:51.948099  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:51.964339  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:52.064858  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:52.445874  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:52.464243  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:52.563501  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:52.945634  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:52.963250  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:53.064613  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:53.445535  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:53.464171  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:53.564189  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:53.948397  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:53.963837  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:54.066723  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:54.786813  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:54.800394  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:54.800823  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:54.945671  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:54.964070  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:55.064804  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:55.446092  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:55.464216  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:55.563490  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:55.945749  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:55.964978  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:56.066243  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:56.449351  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:56.465172  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:56.563897  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:56.945688  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:56.975327  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:57.063948  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:57.445872  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:57.463607  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:57.564886  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:57.949907  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:57.964472  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:58.063851  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:58.449099  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:58.468512  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:58.563702  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:58.945607  340248 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 10:38:58.964293  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:59.069088  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:59.446492  340248 kapi.go:107] duration metric: took 1m16.658466663s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 10:38:59.464861  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:38:59.564865  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:38:59.965502  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:39:00.065770  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:39:00.464462  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:39:00.563882  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 10:39:00.964575  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:39:01.066801  340248 kapi.go:107] duration metric: took 1m17.702702399s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 10:39:01.464208  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:39:01.964846  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:39:02.463976  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:39:02.963965  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:39:03.463856  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:39:03.963878  340248 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 10:39:04.463907  340248 kapi.go:107] duration metric: took 1m18.080750254s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 10:39:04.465618  340248 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-304007 cluster.
	I1002 10:39:04.467187  340248 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 10:39:04.468563  340248 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1002 10:39:04.469903  340248 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, storage-provisioner, ingress-dns, helm-tiller, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1002 10:39:04.471241  340248 addons.go:502] enable addons completed in 1m30.544935732s: enabled=[cloud-spanner default-storageclass storage-provisioner ingress-dns helm-tiller inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1002 10:39:04.471281  340248 start.go:233] waiting for cluster config update ...
	I1002 10:39:04.471304  340248 start.go:242] writing updated cluster config ...
	I1002 10:39:04.471607  340248 ssh_runner.go:195] Run: rm -f paused
	I1002 10:39:04.523490  340248 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 10:39:04.525526  340248 out.go:177] * Done! kubectl is now configured to use "addons-304007" cluster and "default" namespace by default
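The gcp-auth hints printed above can be illustrated with a minimal pod manifest; this is a sketch, not output from the test run — the pod name and image are hypothetical, while the `gcp-auth-skip-secret` label key is the one named in the log:

```yaml
# Hypothetical pod spec: adding the gcp-auth-skip-secret label (any value)
# tells the gcp-auth webhook not to mount GCP credentials into this pod.
apiVersion: v1
kind: Pod
metadata:
  name: skip-secret-pod            # hypothetical name, for illustration only
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
    - name: app
      image: gcr.io/google-samples/hello-app:1.0
```

Per the log's note, pods created before the addon was enabled keep their old spec; they would need to be recreated (or the addon re-enabled with `--refresh`) to pick up the mounted credentials.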
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-02 10:36:44 UTC, ends at Mon 2023-10-02 10:42:15 UTC. --
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.453973921Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696243335453957719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506255,},InodesUsed:&UInt64Value{Value:214,},},},}" file="go-grpc-middleware/chain.go:25" id=4dad5c62-32ee-4eff-92e2-96228cb26c3d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.455004247Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5d9c317a-3c94-42d8-837e-992cd01ced55 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.455081456Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5d9c317a-3c94-42d8-837e-992cd01ced55 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.455422799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1b4ea1207f2a571aded6ed5252462bfdbc619edf2c551e376ea64e33372fb10,PodSandboxId:8afdf6e0add037940cc1f3177289caa6248302fcfe3eb1b32509d8df60efe608,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1696243328658989312,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-vj6lt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee8e861f-e464-4c25-8b78-c36970ab4c9f,},Annotations:map[string]string{io.kubernetes.container.hash: 8fba4429,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:931189e4addfa4cad879a5743bb71f50ccb80ac5bb64993c491ac98fc7fafe76,PodSandboxId:d54d339b87dd0e6987f2b74f7cedc5ff6fcd3d1be7b38a2a2772c5b2f179bce3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,State:CONTAINER_RUNNING,CreatedAt:1696243187447553390,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 022886fc-f9e9-4e77-ba67-52cd421e8921,},Annotations:map[string]string{io.kubernet
es.container.hash: 7f07a072,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:350a2a51992904c6c9865b6b5f427be60b29a0a2e313ffff5b5cdb1aa29d324b,PodSandboxId:8e4bcd385d11e7d88dcf7616b922705c39176f47890f69a529c0cf2860a6172c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c,State:CONTAINER_RUNNING,CreatedAt:1696243157283174454,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-58b88cff49-ldbkr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 0254fb09-ea15-4286-96e8-d8faaf78ebc9,},Annotations:map[string]string{io.kubernetes.container.hash: 6d09cea8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4b88554a18d5928e9df56421e6b140ca6b50bca1d82d73b749266ae5ee93943,PodSandboxId:5c1715ca6065b0542577eb32e58a7331614989ba3643e8cfa307f9069e785224,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1696243143815983889,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-76vkr,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 4caa01fc-509b-4792-908e-2373a0fb46ad,},Annotations:map[string]string{io.kubernetes.container.hash: 9baf4667,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cafeba39e3b59cbd5add9d51c5d00db4e34f037461c523c9850752780e351a5,PodSandboxId:c6eaa06d5fe4f9c8a0296d7edd99797f64a1621583e65bfd57387f2e5a65fac5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1696243121922813392,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g2r4j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 554eb135-f1bf-46ce-bc40-1bc7c50c1ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c120d77a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da95102d620f006616ce07ad435db9df3400569b1847d23e8780855ae4d12a27,PodSandboxId:00c77cf636ee4848489472a2f4ea51127db235e309b711d70a6f46650ffcd679,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1696243116305804084,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-rnksx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3910eada-4775-4bac-a2e5-3141d64ee78b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e71b6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:991ca9e1ad5bbe18493a37f09b1840e7b9d98cad8b41ef2ce45ad6b5e550acd2,PodSandboxId:bb9bb7ab51b584230d6042033d6307b0aa748df202cc48c712eb9474920413dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696243104125843222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6f7688-724c-40be-a61d-5842c7561ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 86825993,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c58bcea2f5188f34a740d38765d5751215f21fb8db09addbd9fecd090fddd970,PodSandboxId:bb9bb7ab51b584230d6042033d6307b0aa748df202cc48c712eb9474920413dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provis
ioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696243071843571568,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6f7688-724c-40be-a61d-5842c7561ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 86825993,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b80bae0b515beb4e82b4b382bc6189b52e63cf01d9f71edd62378887850280d,PodSandboxId:f668edfcad99be86fee5ddd3b5a3602575f5bd0d310f7692ae095d8b9c682363,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169
be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696243071561944586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qh2xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1fd7d4-e250-4d95-99df-7d4175f54858,},Annotations:map[string]string{io.kubernetes.container.hash: 827a3e27,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8625273d8cd7402b4a3bae47a5263028ec06ffdfefa4428c20adabf8f69a7f,PodSandboxId:1cd1d68d8ddbe570f31385abddcffdea3bd9293b33a39e668f6690b22de5bcae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ad
e8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696243062697683676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-g896d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 318da70f-2bf4-4f3e-abfc-448ce49880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3ddb9405,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09f1dc600b86be3f72a4eea3b758303ab0ad4df2af3dade909f649f3b80dd6d,PodSandboxId:af72878cda90307a599569f51449b89c1cab06af30ecde5430dc2b37e09b9853,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696243062504625533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hjmvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59316d1-1292-4d7d-a697-011ff752d5cb,},Annotations:map[string]string{io.kubernetes.container.hash: cb09aa01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38809c6c507d78
495696c5e4ba34eff4c7564a747344e9666548c214d49df6da,PodSandboxId:cd6c1d6966d983aca2d5554da78391d0e9c47f1cdd7b6a9b2898cf8da539b1ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696243034195614704,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-304007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9c3b9fe1dbc9b53b3b5db9e6bcc0c42,},Annotations:map[string]string{io.kubernetes.container.hash: 6b81a09c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181c437daafe239289c1c9eae6a61615664b711a78f6349f6a079c585939749,PodS
andboxId:cacb122238b5c075c03b0ddc7c3801ab65c50c33c1505ac90c46bf971ec47772,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696243034092193701,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-304007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77471356828ddd4a2adeaaf4b71c35e,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082ae7a1010f41c415a422dce0484534fae375b92446239043043977951c5152,PodSandboxId:051120
3ca210194873049e300600429f43eeb94dbb4da4509fc6f3ce388242fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696243033853653348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-304007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0a49f1e5f47c48cf3d595c740b7e7d3,},Annotations:map[string]string{io.kubernetes.container.hash: 5166646c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0d28064ec066d0466a4a66e5321069f108326729798d957f3a8e406094f7181,PodSandboxId:f6c737a49e58e4f65fd6d
43d4e603d43bbfe79f3039926e1acc32b3c6e29528e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696243033662074787,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-304007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17fe5abaf14332d4269de9b95ac3c5d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5d9c317a-3c94-42d8-837e-992cd01ced55 name=/run
time.v1.RuntimeService/ListContainers
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.538577843Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5d456cb6-61b3-4022-b834-10b5352f5af8 name=/runtime.v1.RuntimeService/Version
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.538678477Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5d456cb6-61b3-4022-b834-10b5352f5af8 name=/runtime.v1.RuntimeService/Version
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.539947863Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5cfc55a5-d76c-468c-8d73-4eb1c4050d93 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.541506814Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696243335541491375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506255,},InodesUsed:&UInt64Value{Value:214,},},},}" file="go-grpc-middleware/chain.go:25" id=5cfc55a5-d76c-468c-8d73-4eb1c4050d93 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.542851714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=af8c3ee3-3c55-4245-a9bf-5b62e054ce03 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.542983555Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=af8c3ee3-3c55-4245-a9bf-5b62e054ce03 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.543353627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1b4ea1207f2a571aded6ed5252462bfdbc619edf2c551e376ea64e33372fb10,PodSandboxId:8afdf6e0add037940cc1f3177289caa6248302fcfe3eb1b32509d8df60efe608,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1696243328658989312,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-vj6lt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee8e861f-e464-4c25-8b78-c36970ab4c9f,},Annotations:map[string]string{io.kubernetes.container.hash: 8fba4429,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:931189e4addfa4cad879a5743bb71f50ccb80ac5bb64993c491ac98fc7fafe76,PodSandboxId:d54d339b87dd0e6987f2b74f7cedc5ff6fcd3d1be7b38a2a2772c5b2f179bce3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,State:CONTAINER_RUNNING,CreatedAt:1696243187447553390,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 022886fc-f9e9-4e77-ba67-52cd421e8921,},Annotations:map[string]string{io.kubernet
es.container.hash: 7f07a072,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:350a2a51992904c6c9865b6b5f427be60b29a0a2e313ffff5b5cdb1aa29d324b,PodSandboxId:8e4bcd385d11e7d88dcf7616b922705c39176f47890f69a529c0cf2860a6172c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c,State:CONTAINER_RUNNING,CreatedAt:1696243157283174454,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-58b88cff49-ldbkr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 0254fb09-ea15-4286-96e8-d8faaf78ebc9,},Annotations:map[string]string{io.kubernetes.container.hash: 6d09cea8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4b88554a18d5928e9df56421e6b140ca6b50bca1d82d73b749266ae5ee93943,PodSandboxId:5c1715ca6065b0542577eb32e58a7331614989ba3643e8cfa307f9069e785224,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1696243143815983889,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-76vkr,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 4caa01fc-509b-4792-908e-2373a0fb46ad,},Annotations:map[string]string{io.kubernetes.container.hash: 9baf4667,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cafeba39e3b59cbd5add9d51c5d00db4e34f037461c523c9850752780e351a5,PodSandboxId:c6eaa06d5fe4f9c8a0296d7edd99797f64a1621583e65bfd57387f2e5a65fac5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1696243121922813392,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g2r4j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 554eb135-f1bf-46ce-bc40-1bc7c50c1ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c120d77a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da95102d620f006616ce07ad435db9df3400569b1847d23e8780855ae4d12a27,PodSandboxId:00c77cf636ee4848489472a2f4ea51127db235e309b711d70a6f46650ffcd679,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1696243116305804084,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-rnksx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3910eada-4775-4bac-a2e5-3141d64ee78b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e71b6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:991ca9e1ad5bbe18493a37f09b1840e7b9d98cad8b41ef2ce45ad6b5e550acd2,PodSandboxId:bb9bb7ab51b584230d6042033d6307b0aa748df202cc48c712eb9474920413dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696243104125843222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6f7688-724c-40be-a61d-5842c7561ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 86825993,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c58bcea2f5188f34a740d38765d5751215f21fb8db09addbd9fecd090fddd970,PodSandboxId:bb9bb7ab51b584230d6042033d6307b0aa748df202cc48c712eb9474920413dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provis
ioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696243071843571568,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6f7688-724c-40be-a61d-5842c7561ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 86825993,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b80bae0b515beb4e82b4b382bc6189b52e63cf01d9f71edd62378887850280d,PodSandboxId:f668edfcad99be86fee5ddd3b5a3602575f5bd0d310f7692ae095d8b9c682363,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169
be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696243071561944586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qh2xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1fd7d4-e250-4d95-99df-7d4175f54858,},Annotations:map[string]string{io.kubernetes.container.hash: 827a3e27,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8625273d8cd7402b4a3bae47a5263028ec06ffdfefa4428c20adabf8f69a7f,PodSandboxId:1cd1d68d8ddbe570f31385abddcffdea3bd9293b33a39e668f6690b22de5bcae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ad
e8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696243062697683676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-g896d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 318da70f-2bf4-4f3e-abfc-448ce49880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3ddb9405,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09f1dc600b86be3f72a4eea3b758303ab0ad4df2af3dade909f649f3b80dd6d,PodSandboxId:af72878cda90307a599569f51449b89c1cab06af30ecde5430dc2b37e09b9853,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696243062504625533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hjmvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59316d1-1292-4d7d-a697-011ff752d5cb,},Annotations:map[string]string{io.kubernetes.container.hash: cb09aa01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38809c6c507d78
495696c5e4ba34eff4c7564a747344e9666548c214d49df6da,PodSandboxId:cd6c1d6966d983aca2d5554da78391d0e9c47f1cdd7b6a9b2898cf8da539b1ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696243034195614704,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-304007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9c3b9fe1dbc9b53b3b5db9e6bcc0c42,},Annotations:map[string]string{io.kubernetes.container.hash: 6b81a09c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181c437daafe239289c1c9eae6a61615664b711a78f6349f6a079c585939749,PodS
andboxId:cacb122238b5c075c03b0ddc7c3801ab65c50c33c1505ac90c46bf971ec47772,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696243034092193701,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-304007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77471356828ddd4a2adeaaf4b71c35e,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082ae7a1010f41c415a422dce0484534fae375b92446239043043977951c5152,PodSandboxId:051120
3ca210194873049e300600429f43eeb94dbb4da4509fc6f3ce388242fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696243033853653348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-304007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0a49f1e5f47c48cf3d595c740b7e7d3,},Annotations:map[string]string{io.kubernetes.container.hash: 5166646c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0d28064ec066d0466a4a66e5321069f108326729798d957f3a8e406094f7181,PodSandboxId:f6c737a49e58e4f65fd6d
43d4e603d43bbfe79f3039926e1acc32b3c6e29528e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696243033662074787,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-304007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17fe5abaf14332d4269de9b95ac3c5d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=af8c3ee3-3c55-4245-a9bf-5b62e054ce03 name=/run
time.v1.RuntimeService/ListContainers
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.581556706Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=93616a34-40c4-4c9a-b67f-2b8d5a95976d name=/runtime.v1.RuntimeService/Version
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.581618570Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=93616a34-40c4-4c9a-b67f-2b8d5a95976d name=/runtime.v1.RuntimeService/Version
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.582767294Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ad72f649-8bc7-471c-a657-dab1b1d2d4ab name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.584218524Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696243335584199234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506255,},InodesUsed:&UInt64Value{Value:214,},},},}" file="go-grpc-middleware/chain.go:25" id=ad72f649-8bc7-471c-a657-dab1b1d2d4ab name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.584946865Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1c4c8b27-bff7-4303-a11b-db8b6f4b2aea name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.585021787Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1c4c8b27-bff7-4303-a11b-db8b6f4b2aea name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 10:42:15 addons-304007 crio[716]: time="2023-10-02 10:42:15.585374104Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1b4ea1207f2a571aded6ed5252462bfdbc619edf2c551e376ea64e33372fb10,PodSandboxId:8afdf6e0add037940cc1f3177289caa6248302fcfe3eb1b32509d8df60efe608,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1696243328658989312,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-vj6lt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee8e861f-e464-4c25-8b78-c36970ab4c9f,},Annotations:map[string]string{io.kubernetes.container.hash: 8fba4429,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:931189e4addfa4cad879a5743bb71f50ccb80ac5bb64993c491ac98fc7fafe76,PodSandboxId:d54d339b87dd0e6987f2b74f7cedc5ff6fcd3d1be7b38a2a2772c5b2f179bce3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,State:CONTAINER_RUNNING,CreatedAt:1696243187447553390,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 022886fc-f9e9-4e77-ba67-52cd421e8921,},Annotations:map[string]string{io.kubernet
es.container.hash: 7f07a072,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:350a2a51992904c6c9865b6b5f427be60b29a0a2e313ffff5b5cdb1aa29d324b,PodSandboxId:8e4bcd385d11e7d88dcf7616b922705c39176f47890f69a529c0cf2860a6172c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c,State:CONTAINER_RUNNING,CreatedAt:1696243157283174454,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-58b88cff49-ldbkr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 0254fb09-ea15-4286-96e8-d8faaf78ebc9,},Annotations:map[string]string{io.kubernetes.container.hash: 6d09cea8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4b88554a18d5928e9df56421e6b140ca6b50bca1d82d73b749266ae5ee93943,PodSandboxId:5c1715ca6065b0542577eb32e58a7331614989ba3643e8cfa307f9069e785224,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1696243143815983889,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-76vkr,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 4caa01fc-509b-4792-908e-2373a0fb46ad,},Annotations:map[string]string{io.kubernetes.container.hash: 9baf4667,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cafeba39e3b59cbd5add9d51c5d00db4e34f037461c523c9850752780e351a5,PodSandboxId:c6eaa06d5fe4f9c8a0296d7edd99797f64a1621583e65bfd57387f2e5a65fac5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1696243121922813392,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g2r4j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 554eb135-f1bf-46ce-bc40-1bc7c50c1ec2,},Annotations:map[string]string{io.kubernetes.container.hash: c120d77a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da95102d620f006616ce07ad435db9df3400569b1847d23e8780855ae4d12a27,PodSandboxId:00c77cf636ee4848489472a2f4ea51127db235e309b711d70a6f46650ffcd679,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1696243116305804084,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-rnksx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3910eada-4775-4bac-a2e5-3141d64ee78b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e71b6f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:991ca9e1ad5bbe18493a37f09b1840e7b9d98cad8b41ef2ce45ad6b5e550acd2,PodSandboxId:bb9bb7ab51b584230d6042033d6307b0aa748df202cc48c712eb9474920413dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696243104125843222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6f7688-724c-40be-a61d-5842c7561ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 86825993,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c58bcea2f5188f34a740d38765d5751215f21fb8db09addbd9fecd090fddd970,PodSandboxId:bb9bb7ab51b584230d6042033d6307b0aa748df202cc48c712eb9474920413dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provis
ioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696243071843571568,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac6f7688-724c-40be-a61d-5842c7561ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 86825993,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b80bae0b515beb4e82b4b382bc6189b52e63cf01d9f71edd62378887850280d,PodSandboxId:f668edfcad99be86fee5ddd3b5a3602575f5bd0d310f7692ae095d8b9c682363,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169
be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696243071561944586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qh2xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1fd7d4-e250-4d95-99df-7d4175f54858,},Annotations:map[string]string{io.kubernetes.container.hash: 827a3e27,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e8625273d8cd7402b4a3bae47a5263028ec06ffdfefa4428c20adabf8f69a7f,PodSandboxId:1cd1d68d8ddbe570f31385abddcffdea3bd9293b33a39e668f6690b22de5bcae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ad
e8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696243062697683676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-g896d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 318da70f-2bf4-4f3e-abfc-448ce49880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3ddb9405,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09f1dc600b86be3f72a4eea3b758303ab0ad4df2af3dade909f649f3b80dd6d,PodSandboxId:af72878cda90307a599569f51449b89c1cab06af30ecde5430dc2b37e09b9853,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696243062504625533,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hjmvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59316d1-1292-4d7d-a697-011ff752d5cb,},Annotations:map[string]string{io.kubernetes.container.hash: cb09aa01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38809c6c507d78
495696c5e4ba34eff4c7564a747344e9666548c214d49df6da,PodSandboxId:cd6c1d6966d983aca2d5554da78391d0e9c47f1cdd7b6a9b2898cf8da539b1ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696243034195614704,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-304007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9c3b9fe1dbc9b53b3b5db9e6bcc0c42,},Annotations:map[string]string{io.kubernetes.container.hash: 6b81a09c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181c437daafe239289c1c9eae6a61615664b711a78f6349f6a079c585939749,PodS
andboxId:cacb122238b5c075c03b0ddc7c3801ab65c50c33c1505ac90c46bf971ec47772,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696243034092193701,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-304007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77471356828ddd4a2adeaaf4b71c35e,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:082ae7a1010f41c415a422dce0484534fae375b92446239043043977951c5152,PodSandboxId:051120
3ca210194873049e300600429f43eeb94dbb4da4509fc6f3ce388242fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696243033853653348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-304007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0a49f1e5f47c48cf3d595c740b7e7d3,},Annotations:map[string]string{io.kubernetes.container.hash: 5166646c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0d28064ec066d0466a4a66e5321069f108326729798d957f3a8e406094f7181,PodSandboxId:f6c737a49e58e4f65fd6d
43d4e603d43bbfe79f3039926e1acc32b3c6e29528e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696243033662074787,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-304007,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17fe5abaf14332d4269de9b95ac3c5d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1c4c8b27-bff7-4303-a11b-db8b6f4b2aea name=/run
time.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e1b4ea1207f2a       gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6                      7 seconds ago       Running             hello-world-app           0                   8afdf6e0add03       hello-world-app-5d77478584-vj6lt
	931189e4addfa       docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14                              2 minutes ago       Running             nginx                     0                   d54d339b87dd0       nginx
	350a2a5199290       ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c                        2 minutes ago       Running             headlamp                  0                   8e4bcd385d11e       headlamp-58b88cff49-ldbkr
	f4b88554a18d5       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   5c1715ca6065b       gcp-auth-d4c87556c-76vkr
	0cafeba39e3b5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              patch                     0                   c6eaa06d5fe4f       ingress-nginx-admission-patch-g2r4j
	da95102d620f0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   00c77cf636ee4       ingress-nginx-admission-create-rnksx
	991ca9e1ad5bb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       1                   bb9bb7ab51b58       storage-provisioner
	c58bcea2f5188       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Exited              storage-provisioner       0                   bb9bb7ab51b58       storage-provisioner
	1b80bae0b515b       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                                             4 minutes ago       Running             kube-proxy                0                   f668edfcad99b       kube-proxy-qh2xl
	0e8625273d8cd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   1cd1d68d8ddbe       coredns-5dd5756b68-g896d
	c09f1dc600b86       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   af72878cda903       coredns-5dd5756b68-hjmvh
	38809c6c507d7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             5 minutes ago       Running             etcd                      0                   cd6c1d6966d98       etcd-addons-304007
	6181c437daafe       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                                             5 minutes ago       Running             kube-scheduler            0                   cacb122238b5c       kube-scheduler-addons-304007
	082ae7a1010f4       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                                             5 minutes ago       Running             kube-apiserver            0                   0511203ca2101       kube-apiserver-addons-304007
	e0d28064ec066       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                                             5 minutes ago       Running             kube-controller-manager   0                   f6c737a49e58e       kube-controller-manager-addons-304007
	
	* 
	* ==> coredns [0e8625273d8cd7402b4a3bae47a5263028ec06ffdfefa4428c20adabf8f69a7f] <==
	* linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:57871 - 25398 "HINFO IN 4731821594613454541.3500410376124588349. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008864183s
	[INFO] 10.244.0.9:46481 - 61634 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155692s
	[INFO] 10.244.0.9:46481 - 35534 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000066627s
	[INFO] 10.244.0.9:33618 - 20976 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000202374s
	[INFO] 10.244.0.9:33618 - 46581 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000186176s
	[INFO] 10.244.0.9:38491 - 31776 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090442s
	[INFO] 10.244.0.9:38491 - 3106 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000083756s
	[INFO] 10.244.0.9:53144 - 40463 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00009645s
	[INFO] 10.244.0.9:53144 - 22032 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000119195s
	[INFO] 10.244.0.9:41781 - 22626 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000086042s
	[INFO] 10.244.0.9:41781 - 61541 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000161852s
	[INFO] 10.244.0.9:40621 - 56612 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000124432s
	[INFO] 10.244.0.9:40621 - 19035 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000241062s
	[INFO] 10.244.0.20:60704 - 17638 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000358062s
	[INFO] 10.244.0.20:40529 - 11310 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00015156s
	[INFO] 10.244.0.20:47530 - 22890 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00014928s
	[INFO] 10.244.0.20:59109 - 29948 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000785858s
	[INFO] 10.244.0.24:47361 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000261939s
	
	* 
	* ==> coredns [c09f1dc600b86be3f72a4eea3b758303ab0ad4df2af3dade909f649f3b80dd6d] <==
	* [INFO] 10.244.0.9:44108 - 12162 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000074518s
	[INFO] 10.244.0.9:44108 - 28032 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107024s
	[INFO] 10.244.0.9:49033 - 29830 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145987s
	[INFO] 10.244.0.9:49033 - 38020 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000133023s
	[INFO] 10.244.0.9:57053 - 34628 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000080217s
	[INFO] 10.244.0.9:57053 - 20032 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000041731s
	[INFO] 10.244.0.9:37122 - 53052 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00018082s
	[INFO] 10.244.0.9:37122 - 41266 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000023981s
	[INFO] 10.244.0.9:51631 - 42357 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00005931s
	[INFO] 10.244.0.9:51631 - 43383 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043524s
	[INFO] 10.244.0.9:36099 - 25516 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000044206s
	[INFO] 10.244.0.9:36099 - 30378 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000027931s
	[INFO] 10.244.0.9:59276 - 33581 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000113027s
	[INFO] 10.244.0.9:59276 - 38185 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000129786s
	[INFO] 10.244.0.9:40527 - 255 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080906s
	[INFO] 10.244.0.9:40527 - 24316 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107357s
	[INFO] 10.244.0.9:49268 - 18350 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000088722s
	[INFO] 10.244.0.9:49268 - 59564 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038114s
	[INFO] 10.244.0.9:50368 - 47103 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000065922s
	[INFO] 10.244.0.9:50368 - 30913 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000029337s
	[INFO] 10.244.0.20:48342 - 19102 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000255398s
	[INFO] 10.244.0.20:60832 - 43853 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129873s
	[INFO] 10.244.0.20:38425 - 45157 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095776s
	[INFO] 10.244.0.20:48358 - 31046 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.000746516s
	[INFO] 10.244.0.24:54759 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000287522s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-304007
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-304007
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=addons-304007
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T10_37_21_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-304007
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 10:37:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-304007
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 10:42:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 10:39:54 +0000   Mon, 02 Oct 2023 10:37:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 10:39:54 +0000   Mon, 02 Oct 2023 10:37:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 10:39:54 +0000   Mon, 02 Oct 2023 10:37:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 10:39:54 +0000   Mon, 02 Oct 2023 10:37:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.235
	  Hostname:    addons-304007
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd8d4691087849f48c635b1853488fef
	  System UUID:                dd8d4691-0878-49f4-8c63-5b1853488fef
	  Boot ID:                    e084c2fb-7e9e-4c57-9615-84b039090bc4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-vj6lt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  gcp-auth                    gcp-auth-d4c87556c-76vkr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  headlamp                    headlamp-58b88cff49-ldbkr                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 coredns-5dd5756b68-g896d                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m40s
	  kube-system                 coredns-5dd5756b68-hjmvh                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m41s
	  kube-system                 etcd-addons-304007                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m55s
	  kube-system                 kube-apiserver-addons-304007             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-controller-manager-addons-304007    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-proxy-qh2xl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-scheduler-addons-304007             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (6%)   340Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m21s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m3s (x8 over 5m3s)  kubelet          Node addons-304007 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m3s (x8 over 5m3s)  kubelet          Node addons-304007 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m3s (x7 over 5m3s)  kubelet          Node addons-304007 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m54s                kubelet          Node addons-304007 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s                kubelet          Node addons-304007 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s                kubelet          Node addons-304007 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m54s                kubelet          Node addons-304007 status is now: NodeReady
	  Normal  RegisteredNode           4m42s                node-controller  Node addons-304007 event: Registered Node addons-304007 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.153526] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.047715] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 2 10:37] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.100992] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.137744] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.103894] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.204671] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[ +10.037196] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	[  +8.757308] systemd-fstab-generator[1252]: Ignoring "noauto" for root device
	[ +21.521165] kauditd_printk_skb: 34 callbacks suppressed
	[ +12.392036] kauditd_printk_skb: 35 callbacks suppressed
	[Oct 2 10:38] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.788394] kauditd_printk_skb: 14 callbacks suppressed
	[ +33.631896] kauditd_printk_skb: 10 callbacks suppressed
	[Oct 2 10:39] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.043304] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.468567] kauditd_printk_skb: 8 callbacks suppressed
	[  +7.469802] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.699012] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.022425] kauditd_printk_skb: 9 callbacks suppressed
	[Oct 2 10:40] kauditd_printk_skb: 12 callbacks suppressed
	[Oct 2 10:42] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [38809c6c507d78495696c5e4ba34eff4c7564a747344e9666548c214d49df6da] <==
	* {"level":"warn","ts":"2023-10-02T10:39:13.155287Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"373.696652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-d402bdb5-3384-475e-b837-b98b15392ced\" ","response":"range_response_count:1 size:4256"}
	{"level":"info","ts":"2023-10-02T10:39:13.155302Z","caller":"traceutil/trace.go:171","msg":"trace[1432306419] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-d402bdb5-3384-475e-b837-b98b15392ced; range_end:; response_count:1; response_revision:1209; }","duration":"373.711994ms","start":"2023-10-02T10:39:12.781585Z","end":"2023-10-02T10:39:13.155297Z","steps":["trace[1432306419] 'agreement among raft nodes before linearized reading'  (duration: 373.680256ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T10:39:13.155363Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-02T10:39:12.781572Z","time spent":"373.783813ms","remote":"127.0.0.1:50878","response type":"/etcdserverpb.KV/Range","request count":0,"request size":94,"response count":1,"response size":4279,"request content":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-d402bdb5-3384-475e-b837-b98b15392ced\" "}
	{"level":"warn","ts":"2023-10-02T10:39:13.155584Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.159781ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-10-02T10:39:13.155632Z","caller":"traceutil/trace.go:171","msg":"trace[1985480936] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1209; }","duration":"106.208409ms","start":"2023-10-02T10:39:13.049416Z","end":"2023-10-02T10:39:13.155624Z","steps":["trace[1985480936] 'agreement among raft nodes before linearized reading'  (duration: 106.131081ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T10:39:13.157003Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.315075ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3753"}
	{"level":"info","ts":"2023-10-02T10:39:13.157029Z","caller":"traceutil/trace.go:171","msg":"trace[1487156249] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1209; }","duration":"118.347948ms","start":"2023-10-02T10:39:13.038675Z","end":"2023-10-02T10:39:13.157023Z","steps":["trace[1487156249] 'agreement among raft nodes before linearized reading'  (duration: 118.287338ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T10:39:13.15712Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.925727ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-02T10:39:13.157133Z","caller":"traceutil/trace.go:171","msg":"trace[763547921] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1209; }","duration":"286.938941ms","start":"2023-10-02T10:39:12.870189Z","end":"2023-10-02T10:39:13.157128Z","steps":["trace[763547921] 'agreement among raft nodes before linearized reading'  (duration: 286.914795ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T10:39:13.157234Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"302.96909ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/csi-hostpathplugin-s5j7g\" ","response":"range_response_count:1 size:12704"}
	{"level":"info","ts":"2023-10-02T10:39:13.157248Z","caller":"traceutil/trace.go:171","msg":"trace[892723817] range","detail":"{range_begin:/registry/pods/kube-system/csi-hostpathplugin-s5j7g; range_end:; response_count:1; response_revision:1209; }","duration":"302.983388ms","start":"2023-10-02T10:39:12.85426Z","end":"2023-10-02T10:39:13.157244Z","steps":["trace[892723817] 'agreement among raft nodes before linearized reading'  (duration: 302.936676ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T10:39:13.157261Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-02T10:39:12.854247Z","time spent":"303.010819ms","remote":"127.0.0.1:50878","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":12727,"request content":"key:\"/registry/pods/kube-system/csi-hostpathplugin-s5j7g\" "}
	{"level":"info","ts":"2023-10-02T10:39:14.423175Z","caller":"traceutil/trace.go:171","msg":"trace[1999924673] transaction","detail":"{read_only:false; response_revision:1224; number_of_response:1; }","duration":"119.375613ms","start":"2023-10-02T10:39:14.303783Z","end":"2023-10-02T10:39:14.423158Z","steps":["trace[1999924673] 'process raft request'  (duration: 119.208212ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-02T10:39:14.423788Z","caller":"traceutil/trace.go:171","msg":"trace[721803636] transaction","detail":"{read_only:false; response_revision:1223; number_of_response:1; }","duration":"126.605229ms","start":"2023-10-02T10:39:14.297173Z","end":"2023-10-02T10:39:14.423779Z","steps":["trace[721803636] 'process raft request'  (duration: 124.463566ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T10:39:17.133301Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"390.846615ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:9108"}
	{"level":"info","ts":"2023-10-02T10:39:17.133416Z","caller":"traceutil/trace.go:171","msg":"trace[12973906] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1246; }","duration":"391.015866ms","start":"2023-10-02T10:39:16.742388Z","end":"2023-10-02T10:39:17.133404Z","steps":["trace[12973906] 'range keys from in-memory index tree'  (duration: 390.711977ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T10:39:17.133513Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-02T10:39:16.742374Z","time spent":"391.125729ms","remote":"127.0.0.1:50878","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":3,"response size":9131,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2023-10-02T10:39:17.133715Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"327.08876ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-10-02T10:39:17.133775Z","caller":"traceutil/trace.go:171","msg":"trace[1217782166] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:1246; }","duration":"327.153604ms","start":"2023-10-02T10:39:16.806615Z","end":"2023-10-02T10:39:17.133768Z","steps":["trace[1217782166] 'count revisions from in-memory index tree'  (duration: 327.012791ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T10:39:17.133815Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-02T10:39:16.806602Z","time spent":"327.206124ms","remote":"127.0.0.1:50884","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":30,"request content":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true "}
	{"level":"warn","ts":"2023-10-02T10:39:17.134071Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.421187ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-10-02T10:39:17.13415Z","caller":"traceutil/trace.go:171","msg":"trace[511699762] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:0; response_revision:1246; }","duration":"299.506405ms","start":"2023-10-02T10:39:16.834637Z","end":"2023-10-02T10:39:17.134144Z","steps":["trace[511699762] 'count revisions from in-memory index tree'  (duration: 299.357729ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T10:39:17.134499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.174983ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-02T10:39:17.134575Z","caller":"traceutil/trace.go:171","msg":"trace[1060867756] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1246; }","duration":"263.22697ms","start":"2023-10-02T10:39:16.871315Z","end":"2023-10-02T10:39:17.134542Z","steps":["trace[1060867756] 'range keys from in-memory index tree'  (duration: 263.110302ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-02T10:40:12.768146Z","caller":"traceutil/trace.go:171","msg":"trace[800059416] transaction","detail":"{read_only:false; response_revision:1563; number_of_response:1; }","duration":"169.615591ms","start":"2023-10-02T10:40:12.598499Z","end":"2023-10-02T10:40:12.768114Z","steps":["trace[800059416] 'process raft request'  (duration: 169.195656ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [f4b88554a18d5928e9df56421e6b140ca6b50bca1d82d73b749266ae5ee93943] <==
	* 2023/10/02 10:39:03 GCP Auth Webhook started!
	2023/10/02 10:39:05 Ready to marshal response ...
	2023/10/02 10:39:05 Ready to write response ...
	2023/10/02 10:39:05 Ready to marshal response ...
	2023/10/02 10:39:05 Ready to write response ...
	2023/10/02 10:39:05 Ready to marshal response ...
	2023/10/02 10:39:05 Ready to write response ...
	2023/10/02 10:39:05 Ready to marshal response ...
	2023/10/02 10:39:05 Ready to write response ...
	2023/10/02 10:39:05 Ready to marshal response ...
	2023/10/02 10:39:05 Ready to write response ...
	2023/10/02 10:39:14 Ready to marshal response ...
	2023/10/02 10:39:14 Ready to write response ...
	2023/10/02 10:39:23 Ready to marshal response ...
	2023/10/02 10:39:23 Ready to write response ...
	2023/10/02 10:39:29 Ready to marshal response ...
	2023/10/02 10:39:29 Ready to write response ...
	2023/10/02 10:39:33 Ready to marshal response ...
	2023/10/02 10:39:33 Ready to write response ...
	2023/10/02 10:39:36 Ready to marshal response ...
	2023/10/02 10:39:36 Ready to write response ...
	2023/10/02 10:40:10 Ready to marshal response ...
	2023/10/02 10:40:10 Ready to write response ...
	2023/10/02 10:42:05 Ready to marshal response ...
	2023/10/02 10:42:05 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  10:42:16 up 5 min,  0 users,  load average: 0.73, 1.73, 0.92
	Linux addons-304007 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [082ae7a1010f41c415a422dce0484534fae375b92446239043043977951c5152] <==
	* I1002 10:39:30.749783       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1002 10:39:31.780534       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E1002 10:39:34.505309       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.39.235:8443->10.244.0.26:37810: read: connection reset by peer
	I1002 10:39:36.309412       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1002 10:39:36.639316       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.238.144"}
	E1002 10:39:39.285569       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1002 10:39:50.347462       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1002 10:40:27.999612       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 10:40:27.999684       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 10:40:28.007290       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 10:40:28.007391       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 10:40:28.023921       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 10:40:28.024031       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 10:40:28.039569       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 10:40:28.039806       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 10:40:28.068951       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 10:40:28.069055       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 10:40:28.070478       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 10:40:28.070556       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 10:40:28.089236       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 10:40:28.089381       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1002 10:40:29.040283       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1002 10:40:29.090520       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1002 10:40:29.095788       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1002 10:42:05.545352       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.15.223"}
	
	* 
	* ==> kube-controller-manager [e0d28064ec066d0466a4a66e5321069f108326729798d957f3a8e406094f7181] <==
	* W1002 10:41:00.778397       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 10:41:00.778426       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 10:41:07.646529       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 10:41:07.646798       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 10:41:10.774178       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 10:41:10.774277       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 10:41:44.080978       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 10:41:44.081268       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 10:41:46.902748       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 10:41:46.902772       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 10:41:55.335950       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 10:41:55.336097       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 10:42:01.078146       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 10:42:01.078303       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1002 10:42:05.270185       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1002 10:42:05.317068       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-vj6lt"
	I1002 10:42:05.329135       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="57.977381ms"
	I1002 10:42:05.369271       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.068647ms"
	I1002 10:42:05.369374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.202µs"
	I1002 10:42:05.386695       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="70.232µs"
	I1002 10:42:07.569786       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1002 10:42:07.577367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-f6b66b4b9" duration="9.437µs"
	I1002 10:42:07.584012       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1002 10:42:08.846427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="10.756083ms"
	I1002 10:42:08.846632       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="66.469µs"
	
	* 
	* ==> kube-proxy [1b80bae0b515beb4e82b4b382bc6189b52e63cf01d9f71edd62378887850280d] <==
	* I1002 10:37:53.640280       1 server_others.go:69] "Using iptables proxy"
	I1002 10:37:53.893645       1 node.go:141] Successfully retrieved node IP: 192.168.39.235
	I1002 10:37:54.532167       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1002 10:37:54.532236       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 10:37:54.601397       1 server_others.go:152] "Using iptables Proxier"
	I1002 10:37:54.601536       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 10:37:54.601929       1 server.go:846] "Version info" version="v1.28.2"
	I1002 10:37:54.602194       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 10:37:54.626693       1 config.go:188] "Starting service config controller"
	I1002 10:37:54.690975       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 10:37:54.639142       1 config.go:97] "Starting endpoint slice config controller"
	I1002 10:37:54.691781       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 10:37:54.646711       1 config.go:315] "Starting node config controller"
	I1002 10:37:54.692018       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 10:37:54.794982       1 shared_informer.go:318] Caches are synced for node config
	I1002 10:37:54.795118       1 shared_informer.go:318] Caches are synced for service config
	I1002 10:37:54.795152       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [6181c437daafe239289c1c9eae6a61615664b711a78f6349f6a079c585939749] <==
	* W1002 10:37:18.750390       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 10:37:18.750449       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1002 10:37:18.846027       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1002 10:37:18.846110       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1002 10:37:19.000222       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 10:37:19.000287       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1002 10:37:19.008309       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 10:37:19.008336       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1002 10:37:19.038614       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 10:37:19.038667       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1002 10:37:19.060291       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 10:37:19.060352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1002 10:37:19.063143       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 10:37:19.063191       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1002 10:37:19.132459       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 10:37:19.132514       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1002 10:37:19.159095       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 10:37:19.159151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1002 10:37:19.174024       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 10:37:19.174144       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1002 10:37:19.183970       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 10:37:19.184018       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1002 10:37:19.488471       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 10:37:19.488557       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1002 10:37:22.384523       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 10:36:44 UTC, ends at Mon 2023-10-02 10:42:16 UTC. --
	Oct 02 10:42:05 addons-304007 kubelet[1259]: I1002 10:42:05.331805    1259 memory_manager.go:346] "RemoveStaleState removing state" podUID="dd845b1f-dd3a-4235-b1a3-e1009cad69d7" containerName="csi-external-health-monitor-controller"
	Oct 02 10:42:05 addons-304007 kubelet[1259]: I1002 10:42:05.331815    1259 memory_manager.go:346] "RemoveStaleState removing state" podUID="dd845b1f-dd3a-4235-b1a3-e1009cad69d7" containerName="liveness-probe"
	Oct 02 10:42:05 addons-304007 kubelet[1259]: I1002 10:42:05.331825    1259 memory_manager.go:346] "RemoveStaleState removing state" podUID="dd845b1f-dd3a-4235-b1a3-e1009cad69d7" containerName="hostpath"
	Oct 02 10:42:05 addons-304007 kubelet[1259]: I1002 10:42:05.331830    1259 memory_manager.go:346] "RemoveStaleState removing state" podUID="dd845b1f-dd3a-4235-b1a3-e1009cad69d7" containerName="csi-snapshotter"
	Oct 02 10:42:05 addons-304007 kubelet[1259]: I1002 10:42:05.331837    1259 memory_manager.go:346] "RemoveStaleState removing state" podUID="7dd2fabf-d2cd-484d-8643-189ffff274a7" containerName="volume-snapshot-controller"
	Oct 02 10:42:05 addons-304007 kubelet[1259]: I1002 10:42:05.420531    1259 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twrrq\" (UniqueName: \"kubernetes.io/projected/ee8e861f-e464-4c25-8b78-c36970ab4c9f-kube-api-access-twrrq\") pod \"hello-world-app-5d77478584-vj6lt\" (UID: \"ee8e861f-e464-4c25-8b78-c36970ab4c9f\") " pod="default/hello-world-app-5d77478584-vj6lt"
	Oct 02 10:42:05 addons-304007 kubelet[1259]: I1002 10:42:05.420592    1259 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ee8e861f-e464-4c25-8b78-c36970ab4c9f-gcp-creds\") pod \"hello-world-app-5d77478584-vj6lt\" (UID: \"ee8e861f-e464-4c25-8b78-c36970ab4c9f\") " pod="default/hello-world-app-5d77478584-vj6lt"
	Oct 02 10:42:06 addons-304007 kubelet[1259]: I1002 10:42:06.732143    1259 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqwwh\" (UniqueName: \"kubernetes.io/projected/db5bb398-b6a5-499a-93a1-d21f68e99dd6-kube-api-access-sqwwh\") pod \"db5bb398-b6a5-499a-93a1-d21f68e99dd6\" (UID: \"db5bb398-b6a5-499a-93a1-d21f68e99dd6\") "
	Oct 02 10:42:06 addons-304007 kubelet[1259]: I1002 10:42:06.737791    1259 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db5bb398-b6a5-499a-93a1-d21f68e99dd6-kube-api-access-sqwwh" (OuterVolumeSpecName: "kube-api-access-sqwwh") pod "db5bb398-b6a5-499a-93a1-d21f68e99dd6" (UID: "db5bb398-b6a5-499a-93a1-d21f68e99dd6"). InnerVolumeSpecName "kube-api-access-sqwwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 02 10:42:06 addons-304007 kubelet[1259]: I1002 10:42:06.808334    1259 scope.go:117] "RemoveContainer" containerID="af1033451ed79a2103b2bcdb1e9185425d8447c71d04e7689063f1a62fba6c35"
	Oct 02 10:42:06 addons-304007 kubelet[1259]: I1002 10:42:06.834696    1259 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sqwwh\" (UniqueName: \"kubernetes.io/projected/db5bb398-b6a5-499a-93a1-d21f68e99dd6-kube-api-access-sqwwh\") on node \"addons-304007\" DevicePath \"\""
	Oct 02 10:42:06 addons-304007 kubelet[1259]: I1002 10:42:06.847566    1259 scope.go:117] "RemoveContainer" containerID="af1033451ed79a2103b2bcdb1e9185425d8447c71d04e7689063f1a62fba6c35"
	Oct 02 10:42:06 addons-304007 kubelet[1259]: E1002 10:42:06.848920    1259 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af1033451ed79a2103b2bcdb1e9185425d8447c71d04e7689063f1a62fba6c35\": container with ID starting with af1033451ed79a2103b2bcdb1e9185425d8447c71d04e7689063f1a62fba6c35 not found: ID does not exist" containerID="af1033451ed79a2103b2bcdb1e9185425d8447c71d04e7689063f1a62fba6c35"
	Oct 02 10:42:06 addons-304007 kubelet[1259]: I1002 10:42:06.849166    1259 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af1033451ed79a2103b2bcdb1e9185425d8447c71d04e7689063f1a62fba6c35"} err="failed to get container status \"af1033451ed79a2103b2bcdb1e9185425d8447c71d04e7689063f1a62fba6c35\": rpc error: code = NotFound desc = could not find container \"af1033451ed79a2103b2bcdb1e9185425d8447c71d04e7689063f1a62fba6c35\": container with ID starting with af1033451ed79a2103b2bcdb1e9185425d8447c71d04e7689063f1a62fba6c35 not found: ID does not exist"
	Oct 02 10:42:07 addons-304007 kubelet[1259]: I1002 10:42:07.228476    1259 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="db5bb398-b6a5-499a-93a1-d21f68e99dd6" path="/var/lib/kubelet/pods/db5bb398-b6a5-499a-93a1-d21f68e99dd6/volumes"
	Oct 02 10:42:09 addons-304007 kubelet[1259]: I1002 10:42:09.229306    1259 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3910eada-4775-4bac-a2e5-3141d64ee78b" path="/var/lib/kubelet/pods/3910eada-4775-4bac-a2e5-3141d64ee78b/volumes"
	Oct 02 10:42:09 addons-304007 kubelet[1259]: I1002 10:42:09.229705    1259 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="554eb135-f1bf-46ce-bc40-1bc7c50c1ec2" path="/var/lib/kubelet/pods/554eb135-f1bf-46ce-bc40-1bc7c50c1ec2/volumes"
	Oct 02 10:42:10 addons-304007 kubelet[1259]: I1002 10:42:10.965784    1259 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9gqf\" (UniqueName: \"kubernetes.io/projected/71de1c02-88a9-48f1-9472-a5a9d093197a-kube-api-access-c9gqf\") pod \"71de1c02-88a9-48f1-9472-a5a9d093197a\" (UID: \"71de1c02-88a9-48f1-9472-a5a9d093197a\") "
	Oct 02 10:42:10 addons-304007 kubelet[1259]: I1002 10:42:10.965839    1259 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71de1c02-88a9-48f1-9472-a5a9d093197a-webhook-cert\") pod \"71de1c02-88a9-48f1-9472-a5a9d093197a\" (UID: \"71de1c02-88a9-48f1-9472-a5a9d093197a\") "
	Oct 02 10:42:10 addons-304007 kubelet[1259]: I1002 10:42:10.968791    1259 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71de1c02-88a9-48f1-9472-a5a9d093197a-kube-api-access-c9gqf" (OuterVolumeSpecName: "kube-api-access-c9gqf") pod "71de1c02-88a9-48f1-9472-a5a9d093197a" (UID: "71de1c02-88a9-48f1-9472-a5a9d093197a"). InnerVolumeSpecName "kube-api-access-c9gqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 02 10:42:10 addons-304007 kubelet[1259]: I1002 10:42:10.971051    1259 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71de1c02-88a9-48f1-9472-a5a9d093197a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "71de1c02-88a9-48f1-9472-a5a9d093197a" (UID: "71de1c02-88a9-48f1-9472-a5a9d093197a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 10:42:11 addons-304007 kubelet[1259]: I1002 10:42:11.066520    1259 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-c9gqf\" (UniqueName: \"kubernetes.io/projected/71de1c02-88a9-48f1-9472-a5a9d093197a-kube-api-access-c9gqf\") on node \"addons-304007\" DevicePath \"\""
	Oct 02 10:42:11 addons-304007 kubelet[1259]: I1002 10:42:11.066609    1259 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71de1c02-88a9-48f1-9472-a5a9d093197a-webhook-cert\") on node \"addons-304007\" DevicePath \"\""
	Oct 02 10:42:11 addons-304007 kubelet[1259]: I1002 10:42:11.229237    1259 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="71de1c02-88a9-48f1-9472-a5a9d093197a" path="/var/lib/kubelet/pods/71de1c02-88a9-48f1-9472-a5a9d093197a/volumes"
	Oct 02 10:42:11 addons-304007 kubelet[1259]: I1002 10:42:11.848795    1259 scope.go:117] "RemoveContainer" containerID="9e20b720cbb45e1584b9bed831ccd0db120235e112bd014d42fe5b25c61013e3"
	
	* 
	* ==> storage-provisioner [991ca9e1ad5bbe18493a37f09b1840e7b9d98cad8b41ef2ce45ad6b5e550acd2] <==
	* I1002 10:38:24.496004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 10:38:24.508346       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 10:38:24.508447       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 10:38:24.518302       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 10:38:24.519242       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-304007_388fb620-5734-4ec0-9d52-e6a3c4d39019!
	I1002 10:38:24.519966       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4087c8c6-9a16-4be3-aa70-534c3fc618d3", APIVersion:"v1", ResourceVersion:"935", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-304007_388fb620-5734-4ec0-9d52-e6a3c4d39019 became leader
	I1002 10:38:24.660969       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-304007_388fb620-5734-4ec0-9d52-e6a3c4d39019!
	
	* 
	* ==> storage-provisioner [c58bcea2f5188f34a740d38765d5751215f21fb8db09addbd9fecd090fddd970] <==
	* I1002 10:37:53.295296       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 10:38:23.297258       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-304007 -n addons-304007
helpers_test.go:261: (dbg) Run:  kubectl --context addons-304007 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (160.88s)

TestAddons/StoppedEnableDisable (155.5s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:150: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-304007
addons_test.go:150: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-304007: exit status 82 (2m1.644880823s)

-- stdout --
	* Stopping node "addons-304007"  ...
	* Stopping node "addons-304007"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:152: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-304007" : exit status 82
addons_test.go:154: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-304007
addons_test.go:154: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-304007: exit status 11 (21.568619481s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:156: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-304007" : exit status 11
addons_test.go:158: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-304007
addons_test.go:158: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-304007: exit status 11 (6.143233207s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:160: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-304007" : exit status 11
addons_test.go:163: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-304007
addons_test.go:163: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-304007: exit status 11 (6.144598346s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:165: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-304007" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.50s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (173.47s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:185: (dbg) Run:  kubectl --context ingress-addon-legacy-982656 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:185: (dbg) Done: kubectl --context ingress-addon-legacy-982656 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.149499235s)
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-982656 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context ingress-addon-legacy-982656 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2ab6e312-e751-4154-896a-2c3458adbf4a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2ab6e312-e751-4154-896a-2c3458adbf4a] Running
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 13.011889457s
addons_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-982656 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1002 10:54:04.538912  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 10:54:14.660407  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 10:54:14.665702  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 10:54:14.676008  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 10:54:14.696339  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 10:54:14.736643  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 10:54:14.817025  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 10:54:14.977569  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 10:54:15.298252  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 10:54:15.939309  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 10:54:17.219820  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 10:54:19.780780  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 10:54:24.901754  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 10:54:32.222552  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 10:54:35.141939  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
addons_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-982656 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.961794514s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:256: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:264: (dbg) Run:  kubectl --context ingress-addon-legacy-982656 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-982656 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.39.231
addons_test.go:284: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-982656 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-982656 addons disable ingress-dns --alsologtostderr -v=1: (2.709864148s)
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-982656 addons disable ingress --alsologtostderr -v=1
addons_test.go:289: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-982656 addons disable ingress --alsologtostderr -v=1: (7.827694551s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-982656 -n ingress-addon-legacy-982656
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-982656 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-982656 logs -n 25: (1.128717152s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-250301                                                   | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:49 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1931714109/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-250301 ssh findmnt                                          | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:49 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-250301                                                   | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:49 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1931714109/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-250301 ssh findmnt                                          | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:49 UTC | 02 Oct 23 10:49 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-250301 ssh findmnt                                          | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:49 UTC | 02 Oct 23 10:49 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-250301 ssh findmnt                                          | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:49 UTC | 02 Oct 23 10:49 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-250301                                                   | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:49 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| update-context | functional-250301                                                      | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:49 UTC | 02 Oct 23 10:49 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-250301                                                      | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:49 UTC | 02 Oct 23 10:49 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-250301                                                      | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:49 UTC | 02 Oct 23 10:49 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-250301                                                      | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:49 UTC | 02 Oct 23 10:49 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-250301                                                      | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:49 UTC | 02 Oct 23 10:49 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-250301 ssh pgrep                                            | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:49 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-250301 image build -t                                       | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:49 UTC | 02 Oct 23 10:49 UTC |
	|                | localhost/my-image:functional-250301                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-250301 image ls                                             | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:49 UTC | 02 Oct 23 10:49 UTC |
	| image          | functional-250301                                                      | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:49 UTC | 02 Oct 23 10:49 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-250301                                                      | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:49 UTC | 02 Oct 23 10:49 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| delete         | -p functional-250301                                                   | functional-250301           | jenkins | v1.31.2 | 02 Oct 23 10:50 UTC | 02 Oct 23 10:50 UTC |
	| start          | -p ingress-addon-legacy-982656                                         | ingress-addon-legacy-982656 | jenkins | v1.31.2 | 02 Oct 23 10:50 UTC | 02 Oct 23 10:51 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                     |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-982656                                            | ingress-addon-legacy-982656 | jenkins | v1.31.2 | 02 Oct 23 10:51 UTC | 02 Oct 23 10:51 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-982656                                            | ingress-addon-legacy-982656 | jenkins | v1.31.2 | 02 Oct 23 10:51 UTC | 02 Oct 23 10:51 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-982656                                            | ingress-addon-legacy-982656 | jenkins | v1.31.2 | 02 Oct 23 10:52 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-982656 ip                                         | ingress-addon-legacy-982656 | jenkins | v1.31.2 | 02 Oct 23 10:54 UTC | 02 Oct 23 10:54 UTC |
	| addons         | ingress-addon-legacy-982656                                            | ingress-addon-legacy-982656 | jenkins | v1.31.2 | 02 Oct 23 10:54 UTC | 02 Oct 23 10:54 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-982656                                            | ingress-addon-legacy-982656 | jenkins | v1.31.2 | 02 Oct 23 10:54 UTC | 02 Oct 23 10:54 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 10:50:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 10:50:16.990522  348427 out.go:296] Setting OutFile to fd 1 ...
	I1002 10:50:16.990651  348427 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:50:16.990661  348427 out.go:309] Setting ErrFile to fd 2...
	I1002 10:50:16.990665  348427 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:50:16.990897  348427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 10:50:16.991579  348427 out.go:303] Setting JSON to false
	I1002 10:50:16.992986  348427 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5563,"bootTime":1696238254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 10:50:16.993168  348427 start.go:138] virtualization: kvm guest
	I1002 10:50:16.995622  348427 out.go:177] * [ingress-addon-legacy-982656] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 10:50:16.997219  348427 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 10:50:16.998628  348427 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 10:50:16.997150  348427 notify.go:220] Checking for updates...
	I1002 10:50:17.000109  348427 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 10:50:17.001677  348427 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 10:50:17.003212  348427 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 10:50:17.004800  348427 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 10:50:17.006438  348427 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 10:50:17.040988  348427 out.go:177] * Using the kvm2 driver based on user configuration
	I1002 10:50:17.042348  348427 start.go:298] selected driver: kvm2
	I1002 10:50:17.042375  348427 start.go:902] validating driver "kvm2" against <nil>
	I1002 10:50:17.042389  348427 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 10:50:17.043103  348427 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:50:17.043189  348427 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 10:50:17.058175  348427 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 10:50:17.058243  348427 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 10:50:17.058502  348427 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 10:50:17.058538  348427 cni.go:84] Creating CNI manager for ""
	I1002 10:50:17.058548  348427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 10:50:17.058559  348427 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 10:50:17.058568  348427 start_flags.go:321] config:
	{Name:ingress-addon-legacy-982656 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-982656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:50:17.058697  348427 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:50:17.060524  348427 out.go:177] * Starting control plane node ingress-addon-legacy-982656 in cluster ingress-addon-legacy-982656
	I1002 10:50:17.061934  348427 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1002 10:50:17.569189  348427 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1002 10:50:17.569260  348427 cache.go:57] Caching tarball of preloaded images
	I1002 10:50:17.569425  348427 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1002 10:50:17.571329  348427 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1002 10:50:17.572706  348427 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1002 10:50:17.689983  348427 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1002 10:50:32.317589  348427 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1002 10:50:32.317706  348427 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1002 10:50:33.319891  348427 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1002 10:50:33.320329  348427 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/config.json ...
	I1002 10:50:33.320409  348427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/config.json: {Name:mkbf1f458b5a94264c61fdf84eb516c9b32f706d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:50:33.320629  348427 start.go:365] acquiring machines lock for ingress-addon-legacy-982656: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 10:50:33.320684  348427 start.go:369] acquired machines lock for "ingress-addon-legacy-982656" in 31.373µs
	I1002 10:50:33.320713  348427 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-982656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Ku
bernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-982656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 10:50:33.320798  348427 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 10:50:33.322920  348427 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1002 10:50:33.323097  348427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:50:33.323145  348427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:50:33.337365  348427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36169
	I1002 10:50:33.337837  348427 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:50:33.338423  348427 main.go:141] libmachine: Using API Version  1
	I1002 10:50:33.338445  348427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:50:33.338811  348427 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:50:33.339000  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetMachineName
	I1002 10:50:33.339143  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .DriverName
	I1002 10:50:33.339295  348427 start.go:159] libmachine.API.Create for "ingress-addon-legacy-982656" (driver="kvm2")
	I1002 10:50:33.339318  348427 client.go:168] LocalClient.Create starting
	I1002 10:50:33.339345  348427 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem
	I1002 10:50:33.339377  348427 main.go:141] libmachine: Decoding PEM data...
	I1002 10:50:33.339394  348427 main.go:141] libmachine: Parsing certificate...
	I1002 10:50:33.339448  348427 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem
	I1002 10:50:33.339467  348427 main.go:141] libmachine: Decoding PEM data...
	I1002 10:50:33.339478  348427 main.go:141] libmachine: Parsing certificate...
	I1002 10:50:33.339496  348427 main.go:141] libmachine: Running pre-create checks...
	I1002 10:50:33.339508  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .PreCreateCheck
	I1002 10:50:33.339849  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetConfigRaw
	I1002 10:50:33.340219  348427 main.go:141] libmachine: Creating machine...
	I1002 10:50:33.340234  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .Create
	I1002 10:50:33.340340  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Creating KVM machine...
	I1002 10:50:33.341483  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found existing default KVM network
	I1002 10:50:33.342154  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:33.342015  348486 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a00}
	I1002 10:50:33.347457  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | trying to create private KVM network mk-ingress-addon-legacy-982656 192.168.39.0/24...
	I1002 10:50:33.415711  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | private KVM network mk-ingress-addon-legacy-982656 192.168.39.0/24 created
	I1002 10:50:33.415750  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:33.415682  348486 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 10:50:33.415771  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Setting up store path in /home/jenkins/minikube-integration/17340-332611/.minikube/machines/ingress-addon-legacy-982656 ...
	I1002 10:50:33.415791  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Building disk image from file:///home/jenkins/minikube-integration/17340-332611/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1002 10:50:33.415849  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Downloading /home/jenkins/minikube-integration/17340-332611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17340-332611/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1002 10:50:33.648440  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:33.648251  348486 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/ingress-addon-legacy-982656/id_rsa...
	I1002 10:50:33.853656  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:33.853511  348486 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/ingress-addon-legacy-982656/ingress-addon-legacy-982656.rawdisk...
	I1002 10:50:33.853693  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | Writing magic tar header
	I1002 10:50:33.853716  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | Writing SSH key tar header
	I1002 10:50:33.853731  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:33.853637  348486 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17340-332611/.minikube/machines/ingress-addon-legacy-982656 ...
	I1002 10:50:33.853751  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/ingress-addon-legacy-982656
	I1002 10:50:33.853782  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube/machines
	I1002 10:50:33.853816  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube/machines/ingress-addon-legacy-982656 (perms=drwx------)
	I1002 10:50:33.853830  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 10:50:33.853842  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611
	I1002 10:50:33.853851  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1002 10:50:33.853860  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | Checking permissions on dir: /home/jenkins
	I1002 10:50:33.853869  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube/machines (perms=drwxr-xr-x)
	I1002 10:50:33.853879  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | Checking permissions on dir: /home
	I1002 10:50:33.853887  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | Skipping /home - not owner
	I1002 10:50:33.853896  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube (perms=drwxr-xr-x)
	I1002 10:50:33.853908  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611 (perms=drwxrwxr-x)
	I1002 10:50:33.853918  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 10:50:33.853925  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 10:50:33.853935  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Creating domain...
	I1002 10:50:33.855103  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) define libvirt domain using xml: 
	I1002 10:50:33.855136  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) <domain type='kvm'>
	I1002 10:50:33.855150  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)   <name>ingress-addon-legacy-982656</name>
	I1002 10:50:33.855168  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)   <memory unit='MiB'>4096</memory>
	I1002 10:50:33.855203  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)   <vcpu>2</vcpu>
	I1002 10:50:33.855222  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)   <features>
	I1002 10:50:33.855236  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     <acpi/>
	I1002 10:50:33.855249  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     <apic/>
	I1002 10:50:33.855264  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     <pae/>
	I1002 10:50:33.855272  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     
	I1002 10:50:33.855279  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)   </features>
	I1002 10:50:33.855288  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)   <cpu mode='host-passthrough'>
	I1002 10:50:33.855294  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)   
	I1002 10:50:33.855301  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)   </cpu>
	I1002 10:50:33.855308  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)   <os>
	I1002 10:50:33.855321  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     <type>hvm</type>
	I1002 10:50:33.855330  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     <boot dev='cdrom'/>
	I1002 10:50:33.855338  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     <boot dev='hd'/>
	I1002 10:50:33.855348  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     <bootmenu enable='no'/>
	I1002 10:50:33.855353  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)   </os>
	I1002 10:50:33.855362  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)   <devices>
	I1002 10:50:33.855368  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     <disk type='file' device='cdrom'>
	I1002 10:50:33.855381  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)       <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/ingress-addon-legacy-982656/boot2docker.iso'/>
	I1002 10:50:33.855389  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)       <target dev='hdc' bus='scsi'/>
	I1002 10:50:33.855396  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)       <readonly/>
	I1002 10:50:33.855411  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     </disk>
	I1002 10:50:33.855420  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     <disk type='file' device='disk'>
	I1002 10:50:33.855428  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 10:50:33.855441  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)       <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/ingress-addon-legacy-982656/ingress-addon-legacy-982656.rawdisk'/>
	I1002 10:50:33.855449  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)       <target dev='hda' bus='virtio'/>
	I1002 10:50:33.855456  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     </disk>
	I1002 10:50:33.855465  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     <interface type='network'>
	I1002 10:50:33.855473  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)       <source network='mk-ingress-addon-legacy-982656'/>
	I1002 10:50:33.855484  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)       <model type='virtio'/>
	I1002 10:50:33.855493  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     </interface>
	I1002 10:50:33.855499  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     <interface type='network'>
	I1002 10:50:33.855508  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)       <source network='default'/>
	I1002 10:50:33.855513  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)       <model type='virtio'/>
	I1002 10:50:33.855522  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     </interface>
	I1002 10:50:33.855531  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     <serial type='pty'>
	I1002 10:50:33.855539  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)       <target port='0'/>
	I1002 10:50:33.855550  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     </serial>
	I1002 10:50:33.855564  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     <console type='pty'>
	I1002 10:50:33.855571  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)       <target type='serial' port='0'/>
	I1002 10:50:33.855578  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     </console>
	I1002 10:50:33.855585  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     <rng model='virtio'>
	I1002 10:50:33.855593  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)       <backend model='random'>/dev/random</backend>
	I1002 10:50:33.855600  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     </rng>
	I1002 10:50:33.855606  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     
	I1002 10:50:33.855616  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)     
	I1002 10:50:33.855625  348427 main.go:141] libmachine: (ingress-addon-legacy-982656)   </devices>
	I1002 10:50:33.855630  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) </domain>
	I1002 10:50:33.855640  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) 
	I1002 10:50:33.860218  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:c6:9f:30 in network default
	I1002 10:50:33.860818  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Ensuring networks are active...
	I1002 10:50:33.860880  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:33.861509  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Ensuring network default is active
	I1002 10:50:33.861828  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Ensuring network mk-ingress-addon-legacy-982656 is active
	I1002 10:50:33.862342  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Getting domain xml...
	I1002 10:50:33.863034  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Creating domain...
	I1002 10:50:35.083606  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Waiting to get IP...
	I1002 10:50:35.084328  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:35.084701  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | unable to find current IP address of domain ingress-addon-legacy-982656 in network mk-ingress-addon-legacy-982656
	I1002 10:50:35.084741  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:35.084691  348486 retry.go:31] will retry after 231.41071ms: waiting for machine to come up
	I1002 10:50:35.318224  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:35.318728  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | unable to find current IP address of domain ingress-addon-legacy-982656 in network mk-ingress-addon-legacy-982656
	I1002 10:50:35.318766  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:35.318696  348486 retry.go:31] will retry after 301.637945ms: waiting for machine to come up
	I1002 10:50:35.622205  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:35.622813  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | unable to find current IP address of domain ingress-addon-legacy-982656 in network mk-ingress-addon-legacy-982656
	I1002 10:50:35.622856  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:35.622760  348486 retry.go:31] will retry after 369.82912ms: waiting for machine to come up
	I1002 10:50:35.994310  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:35.994758  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | unable to find current IP address of domain ingress-addon-legacy-982656 in network mk-ingress-addon-legacy-982656
	I1002 10:50:35.994788  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:35.994702  348486 retry.go:31] will retry after 502.319125ms: waiting for machine to come up
	I1002 10:50:36.498371  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:36.498839  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | unable to find current IP address of domain ingress-addon-legacy-982656 in network mk-ingress-addon-legacy-982656
	I1002 10:50:36.498872  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:36.498789  348486 retry.go:31] will retry after 663.128247ms: waiting for machine to come up
	I1002 10:50:37.163498  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:37.164088  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | unable to find current IP address of domain ingress-addon-legacy-982656 in network mk-ingress-addon-legacy-982656
	I1002 10:50:37.164117  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:37.164035  348486 retry.go:31] will retry after 823.664457ms: waiting for machine to come up
	I1002 10:50:37.989145  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:37.989517  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | unable to find current IP address of domain ingress-addon-legacy-982656 in network mk-ingress-addon-legacy-982656
	I1002 10:50:37.989558  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:37.989459  348486 retry.go:31] will retry after 997.725646ms: waiting for machine to come up
	I1002 10:50:38.988838  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:38.989255  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | unable to find current IP address of domain ingress-addon-legacy-982656 in network mk-ingress-addon-legacy-982656
	I1002 10:50:38.989289  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:38.989209  348486 retry.go:31] will retry after 930.528851ms: waiting for machine to come up
	I1002 10:50:39.921318  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:39.921685  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | unable to find current IP address of domain ingress-addon-legacy-982656 in network mk-ingress-addon-legacy-982656
	I1002 10:50:39.921712  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:39.921636  348486 retry.go:31] will retry after 1.293908843s: waiting for machine to come up
	I1002 10:50:41.216656  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:41.217037  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | unable to find current IP address of domain ingress-addon-legacy-982656 in network mk-ingress-addon-legacy-982656
	I1002 10:50:41.217071  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:41.216986  348486 retry.go:31] will retry after 1.508551204s: waiting for machine to come up
	I1002 10:50:42.727678  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:42.728171  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | unable to find current IP address of domain ingress-addon-legacy-982656 in network mk-ingress-addon-legacy-982656
	I1002 10:50:42.728196  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:42.728135  348486 retry.go:31] will retry after 2.129723515s: waiting for machine to come up
	I1002 10:50:44.859807  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:44.860354  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | unable to find current IP address of domain ingress-addon-legacy-982656 in network mk-ingress-addon-legacy-982656
	I1002 10:50:44.860388  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:44.860295  348486 retry.go:31] will retry after 2.388625462s: waiting for machine to come up
	I1002 10:50:47.251347  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:47.251801  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | unable to find current IP address of domain ingress-addon-legacy-982656 in network mk-ingress-addon-legacy-982656
	I1002 10:50:47.251827  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:47.251750  348486 retry.go:31] will retry after 3.728366777s: waiting for machine to come up
	I1002 10:50:50.983086  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:50.983502  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | unable to find current IP address of domain ingress-addon-legacy-982656 in network mk-ingress-addon-legacy-982656
	I1002 10:50:50.983536  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | I1002 10:50:50.983454  348486 retry.go:31] will retry after 5.560057397s: waiting for machine to come up
	I1002 10:50:56.547273  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:56.547725  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has current primary IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:56.547755  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Found IP for machine: 192.168.39.231
	I1002 10:50:56.547778  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Reserving static IP address...
	I1002 10:50:56.548212  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-982656", mac: "52:54:00:f7:55:a2", ip: "192.168.39.231"} in network mk-ingress-addon-legacy-982656
	I1002 10:50:56.619757  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | Getting to WaitForSSH function...
	I1002 10:50:56.619797  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Reserved static IP address: 192.168.39.231
	I1002 10:50:56.619813  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Waiting for SSH to be available...
	I1002 10:50:56.622079  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:56.622536  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f7:55:a2}
	I1002 10:50:56.622574  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:56.622728  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | Using SSH client type: external
	I1002 10:50:56.622774  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/ingress-addon-legacy-982656/id_rsa (-rw-------)
	I1002 10:50:56.622822  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/ingress-addon-legacy-982656/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 10:50:56.622842  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | About to run SSH command:
	I1002 10:50:56.622861  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | exit 0
	I1002 10:50:56.714031  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | SSH cmd err, output: <nil>: 
	I1002 10:50:56.714289  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) KVM machine creation complete!
	I1002 10:50:56.714606  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetConfigRaw
	I1002 10:50:56.715104  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .DriverName
	I1002 10:50:56.715277  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .DriverName
	I1002 10:50:56.715409  348427 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 10:50:56.715421  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetState
	I1002 10:50:56.716658  348427 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 10:50:56.716674  348427 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 10:50:56.716688  348427 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 10:50:56.716703  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHHostname
	I1002 10:50:56.718679  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:56.718995  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:50:56.719030  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:56.719135  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHPort
	I1002 10:50:56.719270  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:50:56.719423  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:50:56.719536  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHUsername
	I1002 10:50:56.719664  348427 main.go:141] libmachine: Using SSH client type: native
	I1002 10:50:56.719994  348427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1002 10:50:56.720007  348427 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 10:50:56.842032  348427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 10:50:56.842061  348427 main.go:141] libmachine: Detecting the provisioner...
	I1002 10:50:56.842071  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHHostname
	I1002 10:50:56.844717  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:56.845045  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:50:56.845073  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:56.845267  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHPort
	I1002 10:50:56.845468  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:50:56.845640  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:50:56.845789  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHUsername
	I1002 10:50:56.846037  348427 main.go:141] libmachine: Using SSH client type: native
	I1002 10:50:56.846383  348427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1002 10:50:56.846404  348427 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 10:50:56.967129  348427 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1002 10:50:56.967256  348427 main.go:141] libmachine: found compatible host: buildroot
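Provisioner detection above works by running `cat /etc/os-release` over SSH and matching the `ID` field ("found compatible host: buildroot"). A minimal sketch of parsing that key=value file format (a simplified parser; real os-release handling also deals with escapes and single quotes):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease extracts KEY=VALUE pairs from /etc/os-release content,
// stripping surrounding double quotes and skipping blanks and comments.
func parseOSRelease(content string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, val, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[key] = strings.Trim(val, `"`)
	}
	return out
}

func main() {
	// The exact output captured in the log above.
	sample := `NAME=Buildroot
VERSION=2021.02.12-1-gb090841-dirty
ID=buildroot
VERSION_ID=2021.02.12
PRETTY_NAME="Buildroot 2021.02.12"`
	info := parseOSRelease(sample)
	fmt.Println(info["ID"], info["PRETTY_NAME"])
}
```

Once `ID` resolves to `buildroot`, the matching provisioner is selected and hostname setup (the `sudo hostname … | sudo tee /etc/hostname` command below) proceeds over the same SSH session.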
	I1002 10:50:56.967274  348427 main.go:141] libmachine: Provisioning with buildroot...
	I1002 10:50:56.967289  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetMachineName
	I1002 10:50:56.967569  348427 buildroot.go:166] provisioning hostname "ingress-addon-legacy-982656"
	I1002 10:50:56.967598  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetMachineName
	I1002 10:50:56.967747  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHHostname
	I1002 10:50:56.970271  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:56.970705  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:50:56.970745  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:56.970878  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHPort
	I1002 10:50:56.971096  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:50:56.971269  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:50:56.971414  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHUsername
	I1002 10:50:56.971646  348427 main.go:141] libmachine: Using SSH client type: native
	I1002 10:50:56.971953  348427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1002 10:50:56.971966  348427 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-982656 && echo "ingress-addon-legacy-982656" | sudo tee /etc/hostname
	I1002 10:50:57.103221  348427 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-982656
	
	I1002 10:50:57.103255  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHHostname
	I1002 10:50:57.105857  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:57.106124  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:50:57.106157  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:57.106274  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHPort
	I1002 10:50:57.106500  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:50:57.106698  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:50:57.106874  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHUsername
	I1002 10:50:57.107040  348427 main.go:141] libmachine: Using SSH client type: native
	I1002 10:50:57.107495  348427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1002 10:50:57.107528  348427 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-982656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-982656/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-982656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 10:50:57.234768  348427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 10:50:57.234838  348427 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 10:50:57.234863  348427 buildroot.go:174] setting up certificates
	I1002 10:50:57.234879  348427 provision.go:83] configureAuth start
	I1002 10:50:57.234891  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetMachineName
	I1002 10:50:57.235198  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetIP
	I1002 10:50:57.237780  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:57.238105  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:50:57.238156  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:57.238273  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHHostname
	I1002 10:50:57.240409  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:57.240699  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:50:57.240741  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:57.240900  348427 provision.go:138] copyHostCerts
	I1002 10:50:57.240935  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 10:50:57.240974  348427 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 10:50:57.240984  348427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 10:50:57.241047  348427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 10:50:57.241120  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 10:50:57.241138  348427 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 10:50:57.241144  348427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 10:50:57.241167  348427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 10:50:57.241214  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 10:50:57.241230  348427 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 10:50:57.241236  348427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 10:50:57.241258  348427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 10:50:57.241301  348427 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-982656 san=[192.168.39.231 192.168.39.231 localhost 127.0.0.1 minikube ingress-addon-legacy-982656]
	I1002 10:50:57.478960  348427 provision.go:172] copyRemoteCerts
	I1002 10:50:57.479020  348427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 10:50:57.479052  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHHostname
	I1002 10:50:57.481748  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:57.482062  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:50:57.482102  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:57.482220  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHPort
	I1002 10:50:57.482436  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:50:57.482625  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHUsername
	I1002 10:50:57.482730  348427 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/ingress-addon-legacy-982656/id_rsa Username:docker}
	I1002 10:50:57.571388  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 10:50:57.571471  348427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 10:50:57.593590  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 10:50:57.593656  348427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1002 10:50:57.615972  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 10:50:57.616064  348427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 10:50:57.638784  348427 provision.go:86] duration metric: configureAuth took 403.884069ms
	I1002 10:50:57.638817  348427 buildroot.go:189] setting minikube options for container-runtime
	I1002 10:50:57.638998  348427 config.go:182] Loaded profile config "ingress-addon-legacy-982656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1002 10:50:57.639079  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHHostname
	I1002 10:50:57.641560  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:57.641914  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:50:57.641957  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:57.642108  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHPort
	I1002 10:50:57.642317  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:50:57.642515  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:50:57.642653  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHUsername
	I1002 10:50:57.642815  348427 main.go:141] libmachine: Using SSH client type: native
	I1002 10:50:57.643142  348427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1002 10:50:57.643159  348427 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 10:50:57.953532  348427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 10:50:57.953566  348427 main.go:141] libmachine: Checking connection to Docker...
	I1002 10:50:57.953581  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetURL
	I1002 10:50:57.954955  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | Using libvirt version 6000000
	I1002 10:50:57.957142  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:57.957480  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:50:57.957507  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:57.957714  348427 main.go:141] libmachine: Docker is up and running!
	I1002 10:50:57.957731  348427 main.go:141] libmachine: Reticulating splines...
	I1002 10:50:57.957738  348427 client.go:171] LocalClient.Create took 24.618413597s
	I1002 10:50:57.957761  348427 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-982656" took 24.618467339s
	I1002 10:50:57.957774  348427 start.go:300] post-start starting for "ingress-addon-legacy-982656" (driver="kvm2")
	I1002 10:50:57.957785  348427 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 10:50:57.957806  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .DriverName
	I1002 10:50:57.958091  348427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 10:50:57.958124  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHHostname
	I1002 10:50:57.960586  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:57.960869  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:50:57.960899  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:57.961051  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHPort
	I1002 10:50:57.961219  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:50:57.961415  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHUsername
	I1002 10:50:57.961596  348427 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/ingress-addon-legacy-982656/id_rsa Username:docker}
	I1002 10:50:58.052675  348427 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 10:50:58.057121  348427 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 10:50:58.057151  348427 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 10:50:58.057277  348427 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 10:50:58.057363  348427 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 10:50:58.057376  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> /etc/ssl/certs/3398652.pem
	I1002 10:50:58.057465  348427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 10:50:58.066764  348427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 10:50:58.088527  348427 start.go:303] post-start completed in 130.736333ms
	I1002 10:50:58.088581  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetConfigRaw
	I1002 10:50:58.089239  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetIP
	I1002 10:50:58.091604  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:58.091972  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:50:58.092009  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:58.092266  348427 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/config.json ...
	I1002 10:50:58.092463  348427 start.go:128] duration metric: createHost completed in 24.771652769s
	I1002 10:50:58.092492  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHHostname
	I1002 10:50:58.094554  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:58.094786  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:50:58.094806  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:58.094929  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHPort
	I1002 10:50:58.095124  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:50:58.095344  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:50:58.095478  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHUsername
	I1002 10:50:58.095678  348427 main.go:141] libmachine: Using SSH client type: native
	I1002 10:50:58.095973  348427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1002 10:50:58.095984  348427 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 10:50:58.215120  348427 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696243858.196255175
	
	I1002 10:50:58.215144  348427 fix.go:206] guest clock: 1696243858.196255175
	I1002 10:50:58.215154  348427 fix.go:219] Guest: 2023-10-02 10:50:58.196255175 +0000 UTC Remote: 2023-10-02 10:50:58.092475999 +0000 UTC m=+41.133070458 (delta=103.779176ms)
	I1002 10:50:58.215181  348427 fix.go:190] guest clock delta is within tolerance: 103.779176ms
	I1002 10:50:58.215203  348427 start.go:83] releasing machines lock for "ingress-addon-legacy-982656", held for 24.894490441s
	I1002 10:50:58.215236  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .DriverName
	I1002 10:50:58.215575  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetIP
	I1002 10:50:58.217639  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:58.217954  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:50:58.217991  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:58.218108  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .DriverName
	I1002 10:50:58.218644  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .DriverName
	I1002 10:50:58.218844  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .DriverName
	I1002 10:50:58.218923  348427 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 10:50:58.218979  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHHostname
	I1002 10:50:58.219086  348427 ssh_runner.go:195] Run: cat /version.json
	I1002 10:50:58.219115  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHHostname
	I1002 10:50:58.221571  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:58.221602  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:58.221922  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:50:58.221957  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:58.221989  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:50:58.222018  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:58.222055  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHPort
	I1002 10:50:58.222237  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:50:58.222262  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHPort
	I1002 10:50:58.222399  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHUsername
	I1002 10:50:58.222459  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:50:58.222542  348427 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/ingress-addon-legacy-982656/id_rsa Username:docker}
	I1002 10:50:58.222601  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHUsername
	I1002 10:50:58.222734  348427 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/ingress-addon-legacy-982656/id_rsa Username:docker}
	I1002 10:50:58.307922  348427 ssh_runner.go:195] Run: systemctl --version
	I1002 10:50:58.328086  348427 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 10:50:58.486214  348427 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 10:50:58.492525  348427 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 10:50:58.492625  348427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 10:50:58.508942  348427 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 10:50:58.508980  348427 start.go:469] detecting cgroup driver to use...
	I1002 10:50:58.509050  348427 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 10:50:58.521984  348427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 10:50:58.534828  348427 docker.go:197] disabling cri-docker service (if available) ...
	I1002 10:50:58.534890  348427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 10:50:58.548349  348427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 10:50:58.561774  348427 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 10:50:58.663516  348427 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 10:50:58.778827  348427 docker.go:213] disabling docker service ...
	I1002 10:50:58.779002  348427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 10:50:58.791306  348427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 10:50:58.802032  348427 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 10:50:58.901879  348427 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 10:50:59.001180  348427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 10:50:59.013725  348427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 10:50:59.030576  348427 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1002 10:50:59.030636  348427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 10:50:59.039405  348427 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 10:50:59.039472  348427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 10:50:59.048458  348427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 10:50:59.057018  348427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 10:50:59.065405  348427 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 10:50:59.075693  348427 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 10:50:59.083495  348427 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 10:50:59.083550  348427 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 10:50:59.096664  348427 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 10:50:59.104902  348427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 10:50:59.201595  348427 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 10:50:59.362965  348427 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 10:50:59.363062  348427 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 10:50:59.367574  348427 start.go:537] Will wait 60s for crictl version
	I1002 10:50:59.367633  348427 ssh_runner.go:195] Run: which crictl
	I1002 10:50:59.371254  348427 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 10:50:59.411399  348427 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 10:50:59.411497  348427 ssh_runner.go:195] Run: crio --version
	I1002 10:50:59.456712  348427 ssh_runner.go:195] Run: crio --version
	I1002 10:50:59.504400  348427 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I1002 10:50:59.505957  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetIP
	I1002 10:50:59.508325  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:59.508597  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:50:59.508635  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:50:59.508797  348427 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 10:50:59.512898  348427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 10:50:59.525421  348427 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1002 10:50:59.525475  348427 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 10:50:59.556294  348427 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1002 10:50:59.556359  348427 ssh_runner.go:195] Run: which lz4
	I1002 10:50:59.560030  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1002 10:50:59.560127  348427 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 10:50:59.564007  348427 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 10:50:59.564041  348427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1002 10:51:01.273750  348427 crio.go:444] Took 1.713648 seconds to copy over tarball
	I1002 10:51:01.273848  348427 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 10:51:04.206704  348427 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.932818218s)
	I1002 10:51:04.206733  348427 crio.go:451] Took 2.932951 seconds to extract the tarball
	I1002 10:51:04.206742  348427 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 10:51:04.248419  348427 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 10:51:04.305477  348427 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1002 10:51:04.305504  348427 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 10:51:04.305550  348427 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 10:51:04.305610  348427 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 10:51:04.305636  348427 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1002 10:51:04.305650  348427 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 10:51:04.305828  348427 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1002 10:51:04.305848  348427 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1002 10:51:04.305887  348427 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1002 10:51:04.305930  348427 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 10:51:04.307238  348427 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1002 10:51:04.307232  348427 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1002 10:51:04.307265  348427 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 10:51:04.307270  348427 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 10:51:04.307273  348427 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1002 10:51:04.307269  348427 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1002 10:51:04.307299  348427 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 10:51:04.307300  348427 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 10:51:04.463144  348427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1002 10:51:04.465244  348427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1002 10:51:04.466453  348427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1002 10:51:04.468077  348427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 10:51:04.470465  348427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1002 10:51:04.473617  348427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1002 10:51:04.551280  348427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1002 10:51:04.568917  348427 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1002 10:51:04.568991  348427 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 10:51:04.569049  348427 ssh_runner.go:195] Run: which crictl
	I1002 10:51:04.628083  348427 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1002 10:51:04.628153  348427 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1002 10:51:04.628200  348427 ssh_runner.go:195] Run: which crictl
	I1002 10:51:04.637504  348427 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1002 10:51:04.637553  348427 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1002 10:51:04.637604  348427 ssh_runner.go:195] Run: which crictl
	I1002 10:51:04.649005  348427 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1002 10:51:04.649026  348427 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1002 10:51:04.649053  348427 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 10:51:04.649056  348427 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1002 10:51:04.649097  348427 ssh_runner.go:195] Run: which crictl
	I1002 10:51:04.649097  348427 ssh_runner.go:195] Run: which crictl
	I1002 10:51:04.649257  348427 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1002 10:51:04.649299  348427 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 10:51:04.649341  348427 ssh_runner.go:195] Run: which crictl
	I1002 10:51:04.667556  348427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1002 10:51:04.667586  348427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1002 10:51:04.667597  348427 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1002 10:51:04.667625  348427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1002 10:51:04.667643  348427 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1002 10:51:04.667659  348427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1002 10:51:04.667683  348427 ssh_runner.go:195] Run: which crictl
	I1002 10:51:04.667710  348427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1002 10:51:04.667735  348427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 10:51:04.823290  348427 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1002 10:51:04.823329  348427 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1002 10:51:04.823385  348427 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1002 10:51:04.823458  348427 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1002 10:51:04.824271  348427 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1002 10:51:04.824318  348427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1002 10:51:04.824330  348427 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1002 10:51:04.858349  348427 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1002 10:51:05.266988  348427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 10:51:05.411273  348427 cache_images.go:92] LoadImages completed in 1.10575151s
	W1002 10:51:05.411399  348427 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
	I1002 10:51:05.411489  348427 ssh_runner.go:195] Run: crio config
	I1002 10:51:05.470017  348427 cni.go:84] Creating CNI manager for ""
	I1002 10:51:05.470046  348427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 10:51:05.470074  348427 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 10:51:05.470099  348427 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-982656 NodeName:ingress-addon-legacy-982656 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1002 10:51:05.470272  348427 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-982656"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 10:51:05.470394  348427 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-982656 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-982656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 10:51:05.470469  348427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1002 10:51:05.479990  348427 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 10:51:05.480067  348427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 10:51:05.488789  348427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I1002 10:51:05.504340  348427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1002 10:51:05.520131  348427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
	I1002 10:51:05.535506  348427 ssh_runner.go:195] Run: grep 192.168.39.231	control-plane.minikube.internal$ /etc/hosts
	I1002 10:51:05.538944  348427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 10:51:05.550322  348427 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656 for IP: 192.168.39.231
	I1002 10:51:05.550363  348427 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:51:05.550508  348427 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 10:51:05.550552  348427 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 10:51:05.550596  348427 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.key
	I1002 10:51:05.550609  348427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt with IP's: []
	I1002 10:51:05.749397  348427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt ...
	I1002 10:51:05.749427  348427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: {Name:mkce1f0def7e94408e703aca9bcaae20e6496cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:51:05.749594  348427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.key ...
	I1002 10:51:05.749606  348427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.key: {Name:mkfb789de4e365791de2af5c6a0d5c4d4fe1baa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:51:05.749682  348427 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/apiserver.key.cabadef2
	I1002 10:51:05.749701  348427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/apiserver.crt.cabadef2 with IP's: [192.168.39.231 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 10:51:05.976846  348427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/apiserver.crt.cabadef2 ...
	I1002 10:51:05.976882  348427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/apiserver.crt.cabadef2: {Name:mkf494f69d38e93efc57afa7c1eaf857ea44b44d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:51:05.977035  348427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/apiserver.key.cabadef2 ...
	I1002 10:51:05.977046  348427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/apiserver.key.cabadef2: {Name:mk93f85d6d2f2c102bee0ee8376efe67aace4ed4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:51:05.977115  348427 certs.go:337] copying /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/apiserver.crt.cabadef2 -> /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/apiserver.crt
	I1002 10:51:05.977180  348427 certs.go:341] copying /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/apiserver.key.cabadef2 -> /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/apiserver.key
	I1002 10:51:05.977241  348427 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/proxy-client.key
	I1002 10:51:05.977251  348427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/proxy-client.crt with IP's: []
	I1002 10:51:06.187820  348427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/proxy-client.crt ...
	I1002 10:51:06.187854  348427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/proxy-client.crt: {Name:mk76b0b30bc898dff0157ac685144bea17957ee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:51:06.188020  348427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/proxy-client.key ...
	I1002 10:51:06.188035  348427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/proxy-client.key: {Name:mk2b8c33b85b4c97e920d6d5de992a9d7137c68e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:51:06.188113  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 10:51:06.188132  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 10:51:06.188148  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 10:51:06.188161  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 10:51:06.188173  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 10:51:06.188188  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 10:51:06.188200  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 10:51:06.188212  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 10:51:06.188266  348427 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 10:51:06.188299  348427 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 10:51:06.188309  348427 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 10:51:06.188334  348427 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 10:51:06.188358  348427 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 10:51:06.188387  348427 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 10:51:06.188425  348427 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 10:51:06.188460  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:51:06.188476  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem -> /usr/share/ca-certificates/339865.pem
	I1002 10:51:06.188487  348427 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> /usr/share/ca-certificates/3398652.pem
	I1002 10:51:06.189149  348427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 10:51:06.212144  348427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 10:51:06.235745  348427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 10:51:06.257388  348427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 10:51:06.279100  348427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 10:51:06.300828  348427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 10:51:06.323021  348427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 10:51:06.345019  348427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 10:51:06.366968  348427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 10:51:06.388007  348427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 10:51:06.410085  348427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 10:51:06.431418  348427 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 10:51:06.447631  348427 ssh_runner.go:195] Run: openssl version
	I1002 10:51:06.453296  348427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 10:51:06.463334  348427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:51:06.468062  348427 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:51:06.468112  348427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 10:51:06.473372  348427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 10:51:06.483962  348427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 10:51:06.494483  348427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 10:51:06.499007  348427 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 10:51:06.499068  348427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 10:51:06.504826  348427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 10:51:06.514919  348427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 10:51:06.525011  348427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 10:51:06.529357  348427 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 10:51:06.529405  348427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 10:51:06.534849  348427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 10:51:06.544905  348427 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 10:51:06.548809  348427 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 10:51:06.548874  348427 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-982656 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.18.20 ClusterName:ingress-addon-legacy-982656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:51:06.548947  348427 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 10:51:06.548987  348427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 10:51:06.590096  348427 cri.go:89] found id: ""
	I1002 10:51:06.590192  348427 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 10:51:06.600102  348427 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 10:51:06.609195  348427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 10:51:06.618514  348427 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 10:51:06.618562  348427 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1002 10:51:06.672202  348427 kubeadm.go:322] W1002 10:51:06.664572     959 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1002 10:51:06.800716  348427 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 10:51:09.410186  348427 kubeadm.go:322] W1002 10:51:09.404458     959 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1002 10:51:09.411557  348427 kubeadm.go:322] W1002 10:51:09.405785     959 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1002 10:51:20.403934  348427 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1002 10:51:20.404011  348427 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 10:51:20.404101  348427 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 10:51:20.404210  348427 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 10:51:20.404305  348427 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 10:51:20.404433  348427 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 10:51:20.404563  348427 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 10:51:20.404628  348427 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 10:51:20.404714  348427 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 10:51:20.406316  348427 out.go:204]   - Generating certificates and keys ...
	I1002 10:51:20.406416  348427 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 10:51:20.406512  348427 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 10:51:20.406624  348427 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 10:51:20.406693  348427 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1002 10:51:20.406784  348427 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1002 10:51:20.406856  348427 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1002 10:51:20.406919  348427 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1002 10:51:20.407078  348427 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-982656 localhost] and IPs [192.168.39.231 127.0.0.1 ::1]
	I1002 10:51:20.407160  348427 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1002 10:51:20.407347  348427 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-982656 localhost] and IPs [192.168.39.231 127.0.0.1 ::1]
	I1002 10:51:20.407442  348427 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 10:51:20.407531  348427 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 10:51:20.407595  348427 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1002 10:51:20.407667  348427 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 10:51:20.407745  348427 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 10:51:20.407824  348427 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 10:51:20.407915  348427 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 10:51:20.407994  348427 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 10:51:20.408083  348427 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 10:51:20.409561  348427 out.go:204]   - Booting up control plane ...
	I1002 10:51:20.409646  348427 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 10:51:20.409724  348427 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 10:51:20.409812  348427 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 10:51:20.409912  348427 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 10:51:20.410067  348427 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 10:51:20.410185  348427 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503746 seconds
	I1002 10:51:20.410308  348427 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 10:51:20.410491  348427 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 10:51:20.410577  348427 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 10:51:20.410744  348427 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-982656 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1002 10:51:20.410828  348427 kubeadm.go:322] [bootstrap-token] Using token: 70304c.6wjals5r857ptrni
	I1002 10:51:20.412352  348427 out.go:204]   - Configuring RBAC rules ...
	I1002 10:51:20.412490  348427 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 10:51:20.412603  348427 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 10:51:20.412770  348427 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 10:51:20.412934  348427 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 10:51:20.413098  348427 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 10:51:20.413218  348427 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 10:51:20.413354  348427 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 10:51:20.413423  348427 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 10:51:20.413493  348427 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 10:51:20.413503  348427 kubeadm.go:322] 
	I1002 10:51:20.413583  348427 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 10:51:20.413592  348427 kubeadm.go:322] 
	I1002 10:51:20.413709  348427 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 10:51:20.413729  348427 kubeadm.go:322] 
	I1002 10:51:20.413782  348427 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 10:51:20.413858  348427 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 10:51:20.413919  348427 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 10:51:20.413929  348427 kubeadm.go:322] 
	I1002 10:51:20.413986  348427 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 10:51:20.414117  348427 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 10:51:20.414206  348427 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 10:51:20.414222  348427 kubeadm.go:322] 
	I1002 10:51:20.414322  348427 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 10:51:20.414424  348427 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 10:51:20.414432  348427 kubeadm.go:322] 
	I1002 10:51:20.414531  348427 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 70304c.6wjals5r857ptrni \
	I1002 10:51:20.414663  348427 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 10:51:20.414697  348427 kubeadm.go:322]     --control-plane 
	I1002 10:51:20.414707  348427 kubeadm.go:322] 
	I1002 10:51:20.414826  348427 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 10:51:20.414851  348427 kubeadm.go:322] 
	I1002 10:51:20.414956  348427 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 70304c.6wjals5r857ptrni \
	I1002 10:51:20.415094  348427 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 10:51:20.415120  348427 cni.go:84] Creating CNI manager for ""
	I1002 10:51:20.415130  348427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 10:51:20.416860  348427 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 10:51:20.418277  348427 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 10:51:20.427857  348427 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 10:51:20.460639  348427 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 10:51:20.460783  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:20.460784  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=ingress-addon-legacy-982656 minikube.k8s.io/updated_at=2023_10_02T10_51_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:20.480559  348427 ops.go:34] apiserver oom_adj: -16
	I1002 10:51:20.641386  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:20.819214  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:21.482538  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:21.982788  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:22.482206  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:22.982336  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:23.482946  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:23.982608  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:24.482866  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:24.982478  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:25.482650  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:25.981919  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:26.482462  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:26.982832  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:27.482910  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:27.982769  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:28.482749  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:28.982775  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:29.482737  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:29.982154  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:30.482385  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:30.982018  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:31.482074  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:31.982161  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:32.482309  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:32.981948  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:33.482476  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:33.982125  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:34.482286  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:34.982235  348427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 10:51:35.119841  348427 kubeadm.go:1081] duration metric: took 14.659134616s to wait for elevateKubeSystemPrivileges.
	I1002 10:51:35.119889  348427 kubeadm.go:406] StartCluster complete in 28.571023708s
	I1002 10:51:35.119922  348427 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:51:35.120059  348427 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 10:51:35.120889  348427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:51:35.121099  348427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 10:51:35.121336  348427 config.go:182] Loaded profile config "ingress-addon-legacy-982656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1002 10:51:35.121269  348427 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 10:51:35.121410  348427 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-982656"
	I1002 10:51:35.121437  348427 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-982656"
	I1002 10:51:35.121449  348427 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-982656"
	I1002 10:51:35.121472  348427 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-982656"
	I1002 10:51:35.121492  348427 host.go:66] Checking if "ingress-addon-legacy-982656" exists ...
	I1002 10:51:35.121733  348427 kapi.go:59] client config for ingress-addon-legacy-982656: &rest.Config{Host:"https://192.168.39.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:51:35.121971  348427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:51:35.121991  348427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:51:35.122013  348427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:51:35.122020  348427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:51:35.122619  348427 cert_rotation.go:137] Starting client certificate rotation controller
	I1002 10:51:35.137530  348427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I1002 10:51:35.137611  348427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42917
	I1002 10:51:35.137929  348427 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:51:35.138039  348427 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:51:35.138425  348427 main.go:141] libmachine: Using API Version  1
	I1002 10:51:35.138540  348427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:51:35.138564  348427 main.go:141] libmachine: Using API Version  1
	I1002 10:51:35.138591  348427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:51:35.138924  348427 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:51:35.138967  348427 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:51:35.139146  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetState
	I1002 10:51:35.139436  348427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:51:35.139466  348427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:51:35.141788  348427 kapi.go:59] client config for ingress-addon-legacy-982656: &rest.Config{Host:"https://192.168.39.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:51:35.142159  348427 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-982656"
	I1002 10:51:35.142204  348427 host.go:66] Checking if "ingress-addon-legacy-982656" exists ...
	I1002 10:51:35.142670  348427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:51:35.142707  348427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:51:35.152296  348427 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-982656" context rescaled to 1 replicas
	I1002 10:51:35.152342  348427 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 10:51:35.155028  348427 out.go:177] * Verifying Kubernetes components...
	I1002 10:51:35.156402  348427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:51:35.155520  348427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45207
	I1002 10:51:35.156979  348427 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:51:35.157286  348427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34841
	I1002 10:51:35.157537  348427 main.go:141] libmachine: Using API Version  1
	I1002 10:51:35.157564  348427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:51:35.157650  348427 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:51:35.157925  348427 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:51:35.158139  348427 main.go:141] libmachine: Using API Version  1
	I1002 10:51:35.158158  348427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:51:35.158173  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetState
	I1002 10:51:35.158510  348427 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:51:35.159022  348427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:51:35.159056  348427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:51:35.159794  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .DriverName
	I1002 10:51:35.161686  348427 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 10:51:35.162818  348427 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 10:51:35.162835  348427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 10:51:35.162850  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHHostname
	I1002 10:51:35.165821  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:51:35.166265  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:51:35.166304  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:51:35.166454  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHPort
	I1002 10:51:35.166628  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:51:35.166842  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHUsername
	I1002 10:51:35.166963  348427 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/ingress-addon-legacy-982656/id_rsa Username:docker}
	I1002 10:51:35.174474  348427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I1002 10:51:35.174901  348427 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:51:35.175362  348427 main.go:141] libmachine: Using API Version  1
	I1002 10:51:35.175390  348427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:51:35.175708  348427 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:51:35.175915  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetState
	I1002 10:51:35.177457  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .DriverName
	I1002 10:51:35.177709  348427 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 10:51:35.177724  348427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 10:51:35.177739  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHHostname
	I1002 10:51:35.180255  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:51:35.180668  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:55:a2", ip: ""} in network mk-ingress-addon-legacy-982656: {Iface:virbr1 ExpiryTime:2023-10-02 11:50:49 +0000 UTC Type:0 Mac:52:54:00:f7:55:a2 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ingress-addon-legacy-982656 Clientid:01:52:54:00:f7:55:a2}
	I1002 10:51:35.180699  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | domain ingress-addon-legacy-982656 has defined IP address 192.168.39.231 and MAC address 52:54:00:f7:55:a2 in network mk-ingress-addon-legacy-982656
	I1002 10:51:35.180817  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHPort
	I1002 10:51:35.180997  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHKeyPath
	I1002 10:51:35.181124  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .GetSSHUsername
	I1002 10:51:35.181253  348427 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/ingress-addon-legacy-982656/id_rsa Username:docker}
	I1002 10:51:35.291480  348427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 10:51:35.292023  348427 kapi.go:59] client config for ingress-addon-legacy-982656: &rest.Config{Host:"https://192.168.39.231:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 10:51:35.292322  348427 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-982656" to be "Ready" ...
	I1002 10:51:35.295556  348427 node_ready.go:49] node "ingress-addon-legacy-982656" has status "Ready":"True"
	I1002 10:51:35.295596  348427 node_ready.go:38] duration metric: took 3.2398ms waiting for node "ingress-addon-legacy-982656" to be "Ready" ...
	I1002 10:51:35.295618  348427 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:51:35.302301  348427 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-982656" in "kube-system" namespace to be "Ready" ...
	I1002 10:51:35.310936  348427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 10:51:35.312542  348427 pod_ready.go:92] pod "etcd-ingress-addon-legacy-982656" in "kube-system" namespace has status "Ready":"True"
	I1002 10:51:35.312567  348427 pod_ready.go:81] duration metric: took 10.236872ms waiting for pod "etcd-ingress-addon-legacy-982656" in "kube-system" namespace to be "Ready" ...
	I1002 10:51:35.312581  348427 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-982656" in "kube-system" namespace to be "Ready" ...
	I1002 10:51:35.320129  348427 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-982656" in "kube-system" namespace has status "Ready":"True"
	I1002 10:51:35.320158  348427 pod_ready.go:81] duration metric: took 7.568729ms waiting for pod "kube-apiserver-ingress-addon-legacy-982656" in "kube-system" namespace to be "Ready" ...
	I1002 10:51:35.320171  348427 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-982656" in "kube-system" namespace to be "Ready" ...
	I1002 10:51:35.325145  348427 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-982656" in "kube-system" namespace has status "Ready":"True"
	I1002 10:51:35.325173  348427 pod_ready.go:81] duration metric: took 4.993684ms waiting for pod "kube-controller-manager-ingress-addon-legacy-982656" in "kube-system" namespace to be "Ready" ...
	I1002 10:51:35.325187  348427 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-982656" in "kube-system" namespace to be "Ready" ...
	I1002 10:51:35.329733  348427 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-982656" in "kube-system" namespace has status "Ready":"True"
	I1002 10:51:35.329758  348427 pod_ready.go:81] duration metric: took 4.560932ms waiting for pod "kube-scheduler-ingress-addon-legacy-982656" in "kube-system" namespace to be "Ready" ...
	I1002 10:51:35.329769  348427 pod_ready.go:38] duration metric: took 34.127028ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 10:51:35.329794  348427 api_server.go:52] waiting for apiserver process to appear ...
	I1002 10:51:35.329859  348427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 10:51:35.374515  348427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 10:51:36.035832  348427 start.go:923] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1002 10:51:36.160855  348427 main.go:141] libmachine: Making call to close driver server
	I1002 10:51:36.160891  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .Close
	I1002 10:51:36.160929  348427 main.go:141] libmachine: Making call to close driver server
	I1002 10:51:36.160951  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .Close
	I1002 10:51:36.160867  348427 api_server.go:72] duration metric: took 1.008487934s to wait for apiserver process to appear ...
	I1002 10:51:36.160974  348427 api_server.go:88] waiting for apiserver healthz status ...
	I1002 10:51:36.161020  348427 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I1002 10:51:36.161254  348427 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:51:36.161277  348427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:51:36.161283  348427 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:51:36.161299  348427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:51:36.161309  348427 main.go:141] libmachine: Making call to close driver server
	I1002 10:51:36.161319  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .Close
	I1002 10:51:36.161289  348427 main.go:141] libmachine: Making call to close driver server
	I1002 10:51:36.161346  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .Close
	I1002 10:51:36.161529  348427 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:51:36.161545  348427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:51:36.161710  348427 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:51:36.161725  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | Closing plugin on server side
	I1002 10:51:36.161727  348427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:51:36.173968  348427 api_server.go:279] https://192.168.39.231:8443/healthz returned 200:
	ok
	I1002 10:51:36.175658  348427 api_server.go:141] control plane version: v1.18.20
	I1002 10:51:36.175682  348427 api_server.go:131] duration metric: took 14.698475ms to wait for apiserver health ...
	I1002 10:51:36.175692  348427 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 10:51:36.187068  348427 system_pods.go:59] 7 kube-system pods found
	I1002 10:51:36.187097  348427 system_pods.go:61] "coredns-66bff467f8-jlntv" [ba7ef520-293c-4034-8401-adedc8094092] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 10:51:36.187106  348427 system_pods.go:61] "etcd-ingress-addon-legacy-982656" [de44b63a-4bd5-4e61-9bee-bdfec59e5945] Running
	I1002 10:51:36.187110  348427 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-982656" [043951b2-c2bf-47c4-bb08-2d65389fabe8] Running
	I1002 10:51:36.187115  348427 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-982656" [7914f34c-a7a2-42d3-8285-d112de508025] Running
	I1002 10:51:36.187123  348427 system_pods.go:61] "kube-proxy-6sfqr" [1d25b02e-a589-4b0f-9ad6-715370b99993] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 10:51:36.187128  348427 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-982656" [756c8240-5d2e-43a6-8d54-31943b5581ec] Running
	I1002 10:51:36.187139  348427 system_pods.go:61] "storage-provisioner" [f086f92d-617a-43d7-8b43-6733594b525f] Pending
	I1002 10:51:36.187147  348427 system_pods.go:74] duration metric: took 11.447911ms to wait for pod list to return data ...
	I1002 10:51:36.187159  348427 default_sa.go:34] waiting for default service account to be created ...
	I1002 10:51:36.188761  348427 main.go:141] libmachine: Making call to close driver server
	I1002 10:51:36.188780  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) Calling .Close
	I1002 10:51:36.189035  348427 main.go:141] libmachine: Successfully made call to close driver server
	I1002 10:51:36.189066  348427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 10:51:36.189082  348427 main.go:141] libmachine: (ingress-addon-legacy-982656) DBG | Closing plugin on server side
	I1002 10:51:36.190908  348427 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1002 10:51:36.192192  348427 addons.go:502] enable addons completed in 1.070925078s: enabled=[storage-provisioner default-storageclass]
	I1002 10:51:36.194822  348427 default_sa.go:45] found service account: "default"
	I1002 10:51:36.194845  348427 default_sa.go:55] duration metric: took 7.679478ms for default service account to be created ...
	I1002 10:51:36.194860  348427 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 10:51:36.208803  348427 system_pods.go:86] 7 kube-system pods found
	I1002 10:51:36.208834  348427 system_pods.go:89] "coredns-66bff467f8-jlntv" [ba7ef520-293c-4034-8401-adedc8094092] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 10:51:36.208842  348427 system_pods.go:89] "etcd-ingress-addon-legacy-982656" [de44b63a-4bd5-4e61-9bee-bdfec59e5945] Running
	I1002 10:51:36.208848  348427 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-982656" [043951b2-c2bf-47c4-bb08-2d65389fabe8] Running
	I1002 10:51:36.208852  348427 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-982656" [7914f34c-a7a2-42d3-8285-d112de508025] Running
	I1002 10:51:36.208858  348427 system_pods.go:89] "kube-proxy-6sfqr" [1d25b02e-a589-4b0f-9ad6-715370b99993] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 10:51:36.208862  348427 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-982656" [756c8240-5d2e-43a6-8d54-31943b5581ec] Running
	I1002 10:51:36.208866  348427 system_pods.go:89] "storage-provisioner" [f086f92d-617a-43d7-8b43-6733594b525f] Pending
	I1002 10:51:36.208889  348427 retry.go:31] will retry after 208.621844ms: missing components: kube-dns, kube-proxy
	I1002 10:51:36.426073  348427 system_pods.go:86] 7 kube-system pods found
	I1002 10:51:36.426104  348427 system_pods.go:89] "coredns-66bff467f8-jlntv" [ba7ef520-293c-4034-8401-adedc8094092] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 10:51:36.426113  348427 system_pods.go:89] "etcd-ingress-addon-legacy-982656" [de44b63a-4bd5-4e61-9bee-bdfec59e5945] Running
	I1002 10:51:36.426119  348427 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-982656" [043951b2-c2bf-47c4-bb08-2d65389fabe8] Running
	I1002 10:51:36.426123  348427 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-982656" [7914f34c-a7a2-42d3-8285-d112de508025] Running
	I1002 10:51:36.426128  348427 system_pods.go:89] "kube-proxy-6sfqr" [1d25b02e-a589-4b0f-9ad6-715370b99993] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 10:51:36.426138  348427 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-982656" [756c8240-5d2e-43a6-8d54-31943b5581ec] Running
	I1002 10:51:36.426149  348427 system_pods.go:89] "storage-provisioner" [f086f92d-617a-43d7-8b43-6733594b525f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 10:51:36.426172  348427 retry.go:31] will retry after 249.548748ms: missing components: kube-dns, kube-proxy
	I1002 10:51:36.709658  348427 system_pods.go:86] 7 kube-system pods found
	I1002 10:51:36.709690  348427 system_pods.go:89] "coredns-66bff467f8-jlntv" [ba7ef520-293c-4034-8401-adedc8094092] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 10:51:36.709698  348427 system_pods.go:89] "etcd-ingress-addon-legacy-982656" [de44b63a-4bd5-4e61-9bee-bdfec59e5945] Running
	I1002 10:51:36.709703  348427 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-982656" [043951b2-c2bf-47c4-bb08-2d65389fabe8] Running
	I1002 10:51:36.709708  348427 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-982656" [7914f34c-a7a2-42d3-8285-d112de508025] Running
	I1002 10:51:36.709713  348427 system_pods.go:89] "kube-proxy-6sfqr" [1d25b02e-a589-4b0f-9ad6-715370b99993] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 10:51:36.709719  348427 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-982656" [756c8240-5d2e-43a6-8d54-31943b5581ec] Running
	I1002 10:51:36.709729  348427 system_pods.go:89] "storage-provisioner" [f086f92d-617a-43d7-8b43-6733594b525f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 10:51:36.709749  348427 retry.go:31] will retry after 383.425685ms: missing components: kube-dns, kube-proxy
	I1002 10:51:37.100964  348427 system_pods.go:86] 7 kube-system pods found
	I1002 10:51:37.100997  348427 system_pods.go:89] "coredns-66bff467f8-jlntv" [ba7ef520-293c-4034-8401-adedc8094092] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 10:51:37.101007  348427 system_pods.go:89] "etcd-ingress-addon-legacy-982656" [de44b63a-4bd5-4e61-9bee-bdfec59e5945] Running
	I1002 10:51:37.101012  348427 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-982656" [043951b2-c2bf-47c4-bb08-2d65389fabe8] Running
	I1002 10:51:37.101016  348427 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-982656" [7914f34c-a7a2-42d3-8285-d112de508025] Running
	I1002 10:51:37.101020  348427 system_pods.go:89] "kube-proxy-6sfqr" [1d25b02e-a589-4b0f-9ad6-715370b99993] Running
	I1002 10:51:37.101024  348427 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-982656" [756c8240-5d2e-43a6-8d54-31943b5581ec] Running
	I1002 10:51:37.101028  348427 system_pods.go:89] "storage-provisioner" [f086f92d-617a-43d7-8b43-6733594b525f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 10:51:37.101037  348427 system_pods.go:126] duration metric: took 906.170255ms to wait for k8s-apps to be running ...
	I1002 10:51:37.101045  348427 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 10:51:37.101095  348427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 10:51:37.124967  348427 system_svc.go:56] duration metric: took 23.907282ms WaitForService to wait for kubelet.
	I1002 10:51:37.125015  348427 kubeadm.go:581] duration metric: took 1.972624132s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 10:51:37.125047  348427 node_conditions.go:102] verifying NodePressure condition ...
	I1002 10:51:37.131512  348427 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 10:51:37.131549  348427 node_conditions.go:123] node cpu capacity is 2
	I1002 10:51:37.131593  348427 node_conditions.go:105] duration metric: took 6.537132ms to run NodePressure ...
	I1002 10:51:37.131609  348427 start.go:228] waiting for startup goroutines ...
	I1002 10:51:37.131619  348427 start.go:233] waiting for cluster config update ...
	I1002 10:51:37.131637  348427 start.go:242] writing updated cluster config ...
	I1002 10:51:37.131933  348427 ssh_runner.go:195] Run: rm -f paused
	I1002 10:51:37.197846  348427 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I1002 10:51:37.199645  348427 out.go:177] 
	W1002 10:51:37.201087  348427 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I1002 10:51:37.202541  348427 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1002 10:51:37.204165  348427 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-982656" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-02 10:50:45 UTC, ends at Mon 2023-10-02 10:54:47 UTC. --
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.657831595Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696244087657817826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=be915858-4c58-43f3-bb8f-4d664308cd2d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.658636512Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=564f4d5b-9277-4476-990f-4941cf04f029 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.658682165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=564f4d5b-9277-4476-990f-4941cf04f029 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.658982821Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4097d40f86bb8cb562e0af8fd923ec6bd0b2bb66824551ddd5917f150faaa074,PodSandboxId:eec4d02321858288b4a582e31f1ecc713df429d67fe7c2cf412e5a58ceb3ccce,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1696244080481135671,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-8f2h4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d42bfdc8-18c2-45f8-a4de-7a140fbfdac2,},Annotations:map[string]string{io.kubernetes.container.hash: af00d7b7,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0454e2d729095e272664e5c18348523941b97b60f0123da428805def8ae50af4,PodSandboxId:4da9cc671c85a9162cfae0eadede40671b49e60380ebdadc3d8c8e04c289457d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,State:CONTAINER_RUNNING,CreatedAt:1696243938438004628,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2ab6e312-e751-4154-896a-2c3458adbf4a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7e4c3d53,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3066e6a3fd9d9151a7a51ce0e5b76ddb3785c626fb6adb03936d74b15d21a88e,PodSandboxId:65c05501e4da9f3347814211b94742bfc11dd1ad08474ed103cb11eee60c5e6a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1696243913322018969,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-hql58,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f,},Annotations:map[string]string{io.kubernetes.container.hash: b746a4f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e84d97e03c210f71efb9ebe74568c2e6fb709e9e86023642d9e3dfcb7a5f3a95,PodSandboxId:70113951e9a92d6d3e2ee162f4a5c733cf40c337fecbc7c3a68a70771ebed4f0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1696243903870662656,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-297mg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 63f31c9f-2263-4d80-95b3-99f21a3b9a02,},Annotations:map[string]string{io.kubernetes.container.hash: 2e007cf5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:136b9102567dc806dbf5053943443c9abe0dc554599723cb9988dcf68102d675,PodSandboxId:d808eeaee7aac705065951145643d5f098a866f28c2591f7eae4c333463ba6b9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1696243902711487207,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kj7v2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d2349b09-e7a7-4938-8383-ea6704352ef7,},Annotations:map[string]string{io.kubernetes.container.hash: bc753a55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40331f46e65bf468e25e7bd46fe9a41c7a895fbb26534e9aa7baf2d79e14bbf7,PodSandboxId:77143262afd6d5562c64fc0e3cc5a60086645da9745d846601a0be4fb6d8914f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696243897103783668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f086f92d-617a-43d7-8b43-6733594b525f,},Annotations:map[string]string{io.kubernetes.container.hash: be945311,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82287d550719812d16807b4478cb9c0b152d753b8d906cb1920a7aa8cb765627,PodSandboxId:4f837d3d695648d0593ee827ae696a9a6f543d605ff2ff70d9ebedcdefb340d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1696243896644879209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-jlntv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba7ef520-293c-4034-8401-adedc8094092,},Annotations:map[string]string{io.kubernetes.container.hash: 9bf63382,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d7b4282ff2a26d6346dd0f8846
db0ae22e17075a868db3d2e71d771955493d,PodSandboxId:796ad17a2514cd959acf6494e035c3a7278e24c2ff7d68d002aa505574d37543,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1696243896388390130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6sfqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d25b02e-a589-4b0f-9ad6-715370b99993,},Annotations:map[string]string{io.kubernetes.container.hash: 3c641df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc10597bf1c35a6a9e427198633900fadc87497f0eac77894823a9cb76d6889d,Pod
SandboxId:9b33f71c896fbceafe25ac6d5e8db3add00eab32ba69501b8c65332a362ea8c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1696243873131341575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-982656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7935c9b827a79ecbe5371cdd3e9be6,},Annotations:map[string]string{io.kubernetes.container.hash: bb73215e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a0b9e200e953dc814ea4fea01ba6a5d74817e983d0458b3c6b899dddd03204,PodSandboxId:78d6b6505bc9165b740483ae771994037d34
987bcbdeb76bc1d0cb2977e405df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1696243871722209391,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-982656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977a1d8bbbe1a3f701d9d467ef3bb826363eb2ed6806b810e576427ecb9e6340,PodSandboxId:51573fe6e41c87cfd1163b006f304904bc1d70252f
84a7befa8de37e0699ac36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1696243871753701052,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-982656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a9c662b0294970a7e37f8f5149427f,},Annotations:map[string]string{io.kubernetes.container.hash: 77fd659,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed94214bd4d923285887a148f0082389397bc4a32c92491453e88697ba57c9d8,PodSandboxId:d592ca8a646807298ddde3814fc92b1d8594a16fb1339a73c
eb2c5ac47800f6a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1696243871575344513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-982656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=564f4d5b-9277-4476-990f-4941cf04f029 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.702650529Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3064798d-ac26-4d25-9d2b-4aacdf41441e name=/runtime.v1.RuntimeService/Version
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.702707393Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3064798d-ac26-4d25-9d2b-4aacdf41441e name=/runtime.v1.RuntimeService/Version
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.704204607Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b3bb543c-3c11-4ce2-a677-2dd8c6e10c69 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.704774190Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696244087704759797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=b3bb543c-3c11-4ce2-a677-2dd8c6e10c69 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.706038071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b843d9a7-dafb-44c3-b998-49710ec9bd29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.706089237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b843d9a7-dafb-44c3-b998-49710ec9bd29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.706456950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4097d40f86bb8cb562e0af8fd923ec6bd0b2bb66824551ddd5917f150faaa074,PodSandboxId:eec4d02321858288b4a582e31f1ecc713df429d67fe7c2cf412e5a58ceb3ccce,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1696244080481135671,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-8f2h4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d42bfdc8-18c2-45f8-a4de-7a140fbfdac2,},Annotations:map[string]string{io.kubernetes.container.hash: af00d7b7,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0454e2d729095e272664e5c18348523941b97b60f0123da428805def8ae50af4,PodSandboxId:4da9cc671c85a9162cfae0eadede40671b49e60380ebdadc3d8c8e04c289457d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,State:CONTAINER_RUNNING,CreatedAt:1696243938438004628,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2ab6e312-e751-4154-896a-2c3458adbf4a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7e4c3d53,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3066e6a3fd9d9151a7a51ce0e5b76ddb3785c626fb6adb03936d74b15d21a88e,PodSandboxId:65c05501e4da9f3347814211b94742bfc11dd1ad08474ed103cb11eee60c5e6a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1696243913322018969,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-hql58,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f,},Annotations:map[string]string{io.kubernetes.container.hash: b746a4f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e84d97e03c210f71efb9ebe74568c2e6fb709e9e86023642d9e3dfcb7a5f3a95,PodSandboxId:70113951e9a92d6d3e2ee162f4a5c733cf40c337fecbc7c3a68a70771ebed4f0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1696243903870662656,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-297mg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 63f31c9f-2263-4d80-95b3-99f21a3b9a02,},Annotations:map[string]string{io.kubernetes.container.hash: 2e007cf5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:136b9102567dc806dbf5053943443c9abe0dc554599723cb9988dcf68102d675,PodSandboxId:d808eeaee7aac705065951145643d5f098a866f28c2591f7eae4c333463ba6b9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1696243902711487207,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kj7v2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d2349b09-e7a7-4938-8383-ea6704352ef7,},Annotations:map[string]string{io.kubernetes.container.hash: bc753a55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40331f46e65bf468e25e7bd46fe9a41c7a895fbb26534e9aa7baf2d79e14bbf7,PodSandboxId:77143262afd6d5562c64fc0e3cc5a60086645da9745d846601a0be4fb6d8914f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696243897103783668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f086f92d-617a-43d7-8b43-6733594b525f,},Annotations:map[string]string{io.kubernetes.container.hash: be945311,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82287d550719812d16807b4478cb9c0b152d753b8d906cb1920a7aa8cb765627,PodSandboxId:4f837d3d695648d0593ee827ae696a9a6f543d605ff2ff70d9ebedcdefb340d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1696243896644879209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-jlntv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba7ef520-293c-4034-8401-adedc8094092,},Annotations:map[string]string{io.kubernetes.container.hash: 9bf63382,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d7b4282ff2a26d6346dd0f8846
db0ae22e17075a868db3d2e71d771955493d,PodSandboxId:796ad17a2514cd959acf6494e035c3a7278e24c2ff7d68d002aa505574d37543,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1696243896388390130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6sfqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d25b02e-a589-4b0f-9ad6-715370b99993,},Annotations:map[string]string{io.kubernetes.container.hash: 3c641df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc10597bf1c35a6a9e427198633900fadc87497f0eac77894823a9cb76d6889d,Pod
SandboxId:9b33f71c896fbceafe25ac6d5e8db3add00eab32ba69501b8c65332a362ea8c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1696243873131341575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-982656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7935c9b827a79ecbe5371cdd3e9be6,},Annotations:map[string]string{io.kubernetes.container.hash: bb73215e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a0b9e200e953dc814ea4fea01ba6a5d74817e983d0458b3c6b899dddd03204,PodSandboxId:78d6b6505bc9165b740483ae771994037d34
987bcbdeb76bc1d0cb2977e405df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1696243871722209391,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-982656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977a1d8bbbe1a3f701d9d467ef3bb826363eb2ed6806b810e576427ecb9e6340,PodSandboxId:51573fe6e41c87cfd1163b006f304904bc1d70252f
84a7befa8de37e0699ac36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1696243871753701052,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-982656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a9c662b0294970a7e37f8f5149427f,},Annotations:map[string]string{io.kubernetes.container.hash: 77fd659,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed94214bd4d923285887a148f0082389397bc4a32c92491453e88697ba57c9d8,PodSandboxId:d592ca8a646807298ddde3814fc92b1d8594a16fb1339a73c
eb2c5ac47800f6a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1696243871575344513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-982656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b843d9a7-dafb-44c3-b998-49710ec9bd29 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.745669714Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0c9aebca-b969-4840-9bcd-2d07f45923d0 name=/runtime.v1.RuntimeService/Version
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.745728004Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0c9aebca-b969-4840-9bcd-2d07f45923d0 name=/runtime.v1.RuntimeService/Version
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.747057440Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b903379f-e11d-4740-a9cc-92950a6300e6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.747613143Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696244087747596908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=b903379f-e11d-4740-a9cc-92950a6300e6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.749117958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6542bc51-d557-4a6d-863c-907b002ba407 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.749167345Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6542bc51-d557-4a6d-863c-907b002ba407 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.749599488Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4097d40f86bb8cb562e0af8fd923ec6bd0b2bb66824551ddd5917f150faaa074,PodSandboxId:eec4d02321858288b4a582e31f1ecc713df429d67fe7c2cf412e5a58ceb3ccce,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1696244080481135671,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-8f2h4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d42bfdc8-18c2-45f8-a4de-7a140fbfdac2,},Annotations:map[string]string{io.kubernetes.container.hash: af00d7b7,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0454e2d729095e272664e5c18348523941b97b60f0123da428805def8ae50af4,PodSandboxId:4da9cc671c85a9162cfae0eadede40671b49e60380ebdadc3d8c8e04c289457d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,State:CONTAINER_RUNNING,CreatedAt:1696243938438004628,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2ab6e312-e751-4154-896a-2c3458adbf4a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7e4c3d53,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3066e6a3fd9d9151a7a51ce0e5b76ddb3785c626fb6adb03936d74b15d21a88e,PodSandboxId:65c05501e4da9f3347814211b94742bfc11dd1ad08474ed103cb11eee60c5e6a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1696243913322018969,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-hql58,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f,},Annotations:map[string]string{io.kubernetes.container.hash: b746a4f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e84d97e03c210f71efb9ebe74568c2e6fb709e9e86023642d9e3dfcb7a5f3a95,PodSandboxId:70113951e9a92d6d3e2ee162f4a5c733cf40c337fecbc7c3a68a70771ebed4f0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1696243903870662656,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-297mg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 63f31c9f-2263-4d80-95b3-99f21a3b9a02,},Annotations:map[string]string{io.kubernetes.container.hash: 2e007cf5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:136b9102567dc806dbf5053943443c9abe0dc554599723cb9988dcf68102d675,PodSandboxId:d808eeaee7aac705065951145643d5f098a866f28c2591f7eae4c333463ba6b9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1696243902711487207,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kj7v2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d2349b09-e7a7-4938-8383-ea6704352ef7,},Annotations:map[string]string{io.kubernetes.container.hash: bc753a55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40331f46e65bf468e25e7bd46fe9a41c7a895fbb26534e9aa7baf2d79e14bbf7,PodSandboxId:77143262afd6d5562c64fc0e3cc5a60086645da9745d846601a0be4fb6d8914f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696243897103783668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f086f92d-617a-43d7-8b43-6733594b525f,},Annotations:map[string]string{io.kubernetes.container.hash: be945311,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82287d550719812d16807b4478cb9c0b152d753b8d906cb1920a7aa8cb765627,PodSandboxId:4f837d3d695648d0593ee827ae696a9a6f543d605ff2ff70d9ebedcdefb340d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1696243896644879209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-jlntv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba7ef520-293c-4034-8401-adedc8094092,},Annotations:map[string]string{io.kubernetes.container.hash: 9bf63382,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d7b4282ff2a26d6346dd0f8846
db0ae22e17075a868db3d2e71d771955493d,PodSandboxId:796ad17a2514cd959acf6494e035c3a7278e24c2ff7d68d002aa505574d37543,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1696243896388390130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6sfqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d25b02e-a589-4b0f-9ad6-715370b99993,},Annotations:map[string]string{io.kubernetes.container.hash: 3c641df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc10597bf1c35a6a9e427198633900fadc87497f0eac77894823a9cb76d6889d,Pod
SandboxId:9b33f71c896fbceafe25ac6d5e8db3add00eab32ba69501b8c65332a362ea8c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1696243873131341575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-982656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7935c9b827a79ecbe5371cdd3e9be6,},Annotations:map[string]string{io.kubernetes.container.hash: bb73215e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a0b9e200e953dc814ea4fea01ba6a5d74817e983d0458b3c6b899dddd03204,PodSandboxId:78d6b6505bc9165b740483ae771994037d34
987bcbdeb76bc1d0cb2977e405df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1696243871722209391,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-982656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977a1d8bbbe1a3f701d9d467ef3bb826363eb2ed6806b810e576427ecb9e6340,PodSandboxId:51573fe6e41c87cfd1163b006f304904bc1d70252f
84a7befa8de37e0699ac36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1696243871753701052,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-982656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a9c662b0294970a7e37f8f5149427f,},Annotations:map[string]string{io.kubernetes.container.hash: 77fd659,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed94214bd4d923285887a148f0082389397bc4a32c92491453e88697ba57c9d8,PodSandboxId:d592ca8a646807298ddde3814fc92b1d8594a16fb1339a73c
eb2c5ac47800f6a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1696243871575344513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-982656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6542bc51-d557-4a6d-863c-907b002ba407 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.782965872Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8e4e00ad-bf28-447f-a2c4-db6732014a7e name=/runtime.v1.RuntimeService/Version
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.783035340Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8e4e00ad-bf28-447f-a2c4-db6732014a7e name=/runtime.v1.RuntimeService/Version
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.785201165Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=58b90772-10cc-4389-b9d7-5021ad0ca6fd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.785781848Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696244087785767364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=58b90772-10cc-4389-b9d7-5021ad0ca6fd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.786820117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4a657e23-ae38-4e77-bf70-9bc7d69c5105 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.786876063Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4a657e23-ae38-4e77-bf70-9bc7d69c5105 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 10:54:47 ingress-addon-legacy-982656 crio[719]: time="2023-10-02 10:54:47.787099815Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4097d40f86bb8cb562e0af8fd923ec6bd0b2bb66824551ddd5917f150faaa074,PodSandboxId:eec4d02321858288b4a582e31f1ecc713df429d67fe7c2cf412e5a58ceb3ccce,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1696244080481135671,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-8f2h4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d42bfdc8-18c2-45f8-a4de-7a140fbfdac2,},Annotations:map[string]string{io.kubernetes.container.hash: af00d7b7,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0454e2d729095e272664e5c18348523941b97b60f0123da428805def8ae50af4,PodSandboxId:4da9cc671c85a9162cfae0eadede40671b49e60380ebdadc3d8c8e04c289457d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,State:CONTAINER_RUNNING,CreatedAt:1696243938438004628,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2ab6e312-e751-4154-896a-2c3458adbf4a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7e4c3d53,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3066e6a3fd9d9151a7a51ce0e5b76ddb3785c626fb6adb03936d74b15d21a88e,PodSandboxId:65c05501e4da9f3347814211b94742bfc11dd1ad08474ed103cb11eee60c5e6a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1696243913322018969,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-hql58,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f,},Annotations:map[string]string{io.kubernetes.container.hash: b746a4f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e84d97e03c210f71efb9ebe74568c2e6fb709e9e86023642d9e3dfcb7a5f3a95,PodSandboxId:70113951e9a92d6d3e2ee162f4a5c733cf40c337fecbc7c3a68a70771ebed4f0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1696243903870662656,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-297mg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 63f31c9f-2263-4d80-95b3-99f21a3b9a02,},Annotations:map[string]string{io.kubernetes.container.hash: 2e007cf5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:136b9102567dc806dbf5053943443c9abe0dc554599723cb9988dcf68102d675,PodSandboxId:d808eeaee7aac705065951145643d5f098a866f28c2591f7eae4c333463ba6b9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1696243902711487207,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kj7v2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d2349b09-e7a7-4938-8383-ea6704352ef7,},Annotations:map[string]string{io.kubernetes.container.hash: bc753a55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40331f46e65bf468e25e7bd46fe9a41c7a895fbb26534e9aa7baf2d79e14bbf7,PodSandboxId:77143262afd6d5562c64fc0e3cc5a60086645da9745d846601a0be4fb6d8914f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696243897103783668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f086f92d-617a-43d7-8b43-6733594b525f,},Annotations:map[string]string{io.kubernetes.container.hash: be945311,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82287d550719812d16807b4478cb9c0b152d753b8d906cb1920a7aa8cb765627,PodSandboxId:4f837d3d695648d0593ee827ae696a9a6f543d605ff2ff70d9ebedcdefb340d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1696243896644879209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-jlntv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba7ef520-293c-4034-8401-adedc8094092,},Annotations:map[string]string{io.kubernetes.container.hash: 9bf63382,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d7b4282ff2a26d6346dd0f8846
db0ae22e17075a868db3d2e71d771955493d,PodSandboxId:796ad17a2514cd959acf6494e035c3a7278e24c2ff7d68d002aa505574d37543,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1696243896388390130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6sfqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d25b02e-a589-4b0f-9ad6-715370b99993,},Annotations:map[string]string{io.kubernetes.container.hash: 3c641df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc10597bf1c35a6a9e427198633900fadc87497f0eac77894823a9cb76d6889d,Pod
SandboxId:9b33f71c896fbceafe25ac6d5e8db3add00eab32ba69501b8c65332a362ea8c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1696243873131341575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-982656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7935c9b827a79ecbe5371cdd3e9be6,},Annotations:map[string]string{io.kubernetes.container.hash: bb73215e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a0b9e200e953dc814ea4fea01ba6a5d74817e983d0458b3c6b899dddd03204,PodSandboxId:78d6b6505bc9165b740483ae771994037d34
987bcbdeb76bc1d0cb2977e405df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1696243871722209391,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-982656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977a1d8bbbe1a3f701d9d467ef3bb826363eb2ed6806b810e576427ecb9e6340,PodSandboxId:51573fe6e41c87cfd1163b006f304904bc1d70252f
84a7befa8de37e0699ac36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1696243871753701052,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-982656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a9c662b0294970a7e37f8f5149427f,},Annotations:map[string]string{io.kubernetes.container.hash: 77fd659,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed94214bd4d923285887a148f0082389397bc4a32c92491453e88697ba57c9d8,PodSandboxId:d592ca8a646807298ddde3814fc92b1d8594a16fb1339a73c
eb2c5ac47800f6a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1696243871575344513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-982656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4a657e23-ae38-4e77-bf70-9bc7d69c5105 name=/runtime.v1.RuntimeServ
ice/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4097d40f86bb8       gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6            7 seconds ago       Running             hello-world-app           0                   eec4d02321858       hello-world-app-5f5d8b66bb-8f2h4
	0454e2d729095       docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14                    2 minutes ago       Running             nginx                     0                   4da9cc671c85a       nginx
	3066e6a3fd9d9       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   65c05501e4da9       ingress-nginx-controller-7fcf777cb7-hql58
	e84d97e03c210       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   70113951e9a92       ingress-nginx-admission-patch-297mg
	136b9102567dc       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   d808eeaee7aac       ingress-nginx-admission-create-kj7v2
	40331f46e65bf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   77143262afd6d       storage-provisioner
	82287d5507198       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   4f837d3d69564       coredns-66bff467f8-jlntv
	75d7b4282ff2a       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   796ad17a2514c       kube-proxy-6sfqr
	bc10597bf1c35       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   9b33f71c896fb       etcd-ingress-addon-legacy-982656
	977a1d8bbbe1a       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   51573fe6e41c8       kube-apiserver-ingress-addon-legacy-982656
	f6a0b9e200e95       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   78d6b6505bc91       kube-scheduler-ingress-addon-legacy-982656
	ed94214bd4d92       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   d592ca8a64680       kube-controller-manager-ingress-addon-legacy-982656
	
	* 
	* ==> coredns [82287d550719812d16807b4478cb9c0b152d753b8d906cb1920a7aa8cb765627] <==
	* [INFO] 10.244.0.5:34623 - 28385 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000181481s
	[INFO] 10.244.0.5:34623 - 51144 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000174888s
	[INFO] 10.244.0.5:43752 - 61004 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000081495s
	[INFO] 10.244.0.5:43752 - 49760 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074515s
	[INFO] 10.244.0.5:34623 - 59634 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000136453s
	[INFO] 10.244.0.5:43752 - 22116 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061855s
	[INFO] 10.244.0.5:34623 - 55029 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000144669s
	[INFO] 10.244.0.5:43752 - 46614 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000078221s
	[INFO] 10.244.0.5:43752 - 41558 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000087516s
	[INFO] 10.244.0.5:34623 - 35143 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000205673s
	[INFO] 10.244.0.5:43752 - 38254 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00010085s
	[INFO] 10.244.0.5:38615 - 8572 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000117263s
	[INFO] 10.244.0.5:59482 - 31116 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00005389s
	[INFO] 10.244.0.5:59482 - 7181 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000061633s
	[INFO] 10.244.0.5:38615 - 56830 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000032016s
	[INFO] 10.244.0.5:38615 - 20045 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005591s
	[INFO] 10.244.0.5:38615 - 19267 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000086779s
	[INFO] 10.244.0.5:59482 - 28656 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000179943s
	[INFO] 10.244.0.5:38615 - 4892 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000070625s
	[INFO] 10.244.0.5:38615 - 48904 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061142s
	[INFO] 10.244.0.5:59482 - 11680 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054481s
	[INFO] 10.244.0.5:38615 - 29830 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00010178s
	[INFO] 10.244.0.5:59482 - 53660 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005581s
	[INFO] 10.244.0.5:59482 - 18775 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044744s
	[INFO] 10.244.0.5:59482 - 24484 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000071865s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-982656
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-982656
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=ingress-addon-legacy-982656
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T10_51_20_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 10:51:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-982656
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 10:54:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 10:52:21 +0000   Mon, 02 Oct 2023 10:51:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 10:52:21 +0000   Mon, 02 Oct 2023 10:51:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 10:52:21 +0000   Mon, 02 Oct 2023 10:51:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 10:52:21 +0000   Mon, 02 Oct 2023 10:51:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    ingress-addon-legacy-982656
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 9cc0b01046cb4361a16653d13394e139
	  System UUID:                9cc0b010-46cb-4361-a166-53d13394e139
	  Boot ID:                    aa10452c-7fa7-44b5-b984-74d50c1d2597
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-8f2h4                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 coredns-66bff467f8-jlntv                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m13s
	  kube-system                 etcd-ingress-addon-legacy-982656                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-apiserver-ingress-addon-legacy-982656             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-982656    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kube-proxy-6sfqr                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 kube-scheduler-ingress-addon-legacy-982656             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m38s (x5 over 3m38s)  kubelet     Node ingress-addon-legacy-982656 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m38s (x5 over 3m38s)  kubelet     Node ingress-addon-legacy-982656 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m38s (x4 over 3m38s)  kubelet     Node ingress-addon-legacy-982656 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m28s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m28s                  kubelet     Node ingress-addon-legacy-982656 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m28s                  kubelet     Node ingress-addon-legacy-982656 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m28s                  kubelet     Node ingress-addon-legacy-982656 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m28s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m18s                  kubelet     Node ingress-addon-legacy-982656 status is now: NodeReady
	  Normal  Starting                 3m12s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct 2 10:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.099555] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.394551] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.476651] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.146644] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.313413] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.824718] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.104064] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.137499] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.094784] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.201859] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[Oct 2 10:51] systemd-fstab-generator[1030]: Ignoring "noauto" for root device
	[  +3.178630] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.887042] systemd-fstab-generator[1421]: Ignoring "noauto" for root device
	[ +16.329786] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.567131] kauditd_printk_skb: 25 callbacks suppressed
	[Oct 2 10:52] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.017899] kauditd_printk_skb: 3 callbacks suppressed
	[Oct 2 10:54] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [bc10597bf1c35a6a9e427198633900fadc87497f0eac77894823a9cb76d6889d] <==
	* raft2023/10/02 10:51:13 INFO: 6a82bbfd8eee2a80 became follower at term 1
	raft2023/10/02 10:51:13 INFO: 6a82bbfd8eee2a80 switched to configuration voters=(7674903412691839616)
	2023-10-02 10:51:13.303885 W | auth: simple token is not cryptographically signed
	2023-10-02 10:51:13.312224 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-10-02 10:51:13.314222 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-02 10:51:13.314675 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-02 10:51:13.314865 I | etcdserver: 6a82bbfd8eee2a80 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-02 10:51:13.315181 I | embed: listening for peers on 192.168.39.231:2380
	raft2023/10/02 10:51:13 INFO: 6a82bbfd8eee2a80 switched to configuration voters=(7674903412691839616)
	2023-10-02 10:51:13.315674 I | etcdserver/membership: added member 6a82bbfd8eee2a80 [https://192.168.39.231:2380] to cluster 1a20717615099fdd
	raft2023/10/02 10:51:14 INFO: 6a82bbfd8eee2a80 is starting a new election at term 1
	raft2023/10/02 10:51:14 INFO: 6a82bbfd8eee2a80 became candidate at term 2
	raft2023/10/02 10:51:14 INFO: 6a82bbfd8eee2a80 received MsgVoteResp from 6a82bbfd8eee2a80 at term 2
	raft2023/10/02 10:51:14 INFO: 6a82bbfd8eee2a80 became leader at term 2
	raft2023/10/02 10:51:14 INFO: raft.node: 6a82bbfd8eee2a80 elected leader 6a82bbfd8eee2a80 at term 2
	2023-10-02 10:51:14.194920 I | etcdserver: published {Name:ingress-addon-legacy-982656 ClientURLs:[https://192.168.39.231:2379]} to cluster 1a20717615099fdd
	2023-10-02 10:51:14.195119 I | embed: ready to serve client requests
	2023-10-02 10:51:14.195201 I | embed: ready to serve client requests
	2023-10-02 10:51:14.196544 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-02 10:51:14.197396 I | embed: serving client requests on 192.168.39.231:2379
	2023-10-02 10:51:14.197649 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-02 10:51:14.198630 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-02 10:51:14.198713 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-02 10:51:34.921923 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" " with result "range_response_count:1 size:236" took too long (207.170698ms) to execute
	2023-10-02 10:51:35.794191 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/coredns\" " with result "range_response_count:1 size:577" took too long (185.173004ms) to execute
	
	* 
	* ==> kernel <==
	*  10:54:48 up 4 min,  0 users,  load average: 0.61, 0.43, 0.19
	Linux ingress-addon-legacy-982656 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [977a1d8bbbe1a3f701d9d467ef3bb826363eb2ed6806b810e576427ecb9e6340] <==
	* I1002 10:51:18.086298       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1002 10:51:18.096672       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1002 10:51:18.101715       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1002 10:51:18.101783       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1002 10:51:18.585182       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 10:51:18.626117       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1002 10:51:18.771381       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.231]
	I1002 10:51:18.772161       1 controller.go:609] quota admission added evaluator for: endpoints
	I1002 10:51:18.778266       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 10:51:19.465926       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1002 10:51:20.287093       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1002 10:51:20.376295       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1002 10:51:20.723368       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 10:51:35.358513       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1002 10:51:35.556992       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1002 10:51:36.011638       1 trace.go:116] Trace[1746591658]: "GuaranteedUpdate etcd3" type:*core.Node (started: 2023-10-02 10:51:35.490950319 +0000 UTC m=+23.578061628) (total time: 520.658177ms):
	Trace[1746591658]: [127.161724ms] [125.942895ms] Transaction committed
	Trace[1746591658]: [422.626474ms] [294.312277ms] Transaction committed
	Trace[1746591658]: [520.609567ms] [96.660133ms] Transaction committed
	I1002 10:51:36.012921       1 trace.go:116] Trace[2063973957]: "Patch" url:/api/v1/nodes/ingress-addon-legacy-982656,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:node-controller,client:192.168.39.231 (started: 2023-10-02 10:51:35.490735349 +0000 UTC m=+23.577846763) (total time: 522.155564ms):
	Trace[2063973957]: [127.417689ms] [126.232559ms] About to apply patch
	Trace[2063973957]: [422.906884ms] [294.634138ms] About to apply patch
	Trace[2063973957]: [521.490483ms] [97.589385ms] Object stored in database
	I1002 10:51:38.033302       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1002 10:52:12.728317       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [ed94214bd4d923285887a148f0082389397bc4a32c92491453e88697ba57c9d8] <==
	* I1002 10:51:35.461161       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1002 10:51:35.461232       1 shared_informer.go:230] Caches are synced for GC 
	I1002 10:51:35.461309       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1002 10:51:35.462349       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-982656", UID:"5c2e8e5f-8d0b-49b3-b5fd-ac871622b06d", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-982656 event: Registered Node ingress-addon-legacy-982656 in Controller
	I1002 10:51:35.554500       1 shared_informer.go:230] Caches are synced for deployment 
	I1002 10:51:35.562292       1 shared_informer.go:230] Caches are synced for disruption 
	I1002 10:51:35.562329       1 disruption.go:339] Sending events to api server.
	I1002 10:51:35.580766       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I1002 10:51:35.635909       1 shared_informer.go:230] Caches are synced for resource quota 
	I1002 10:51:35.636121       1 shared_informer.go:230] Caches are synced for resource quota 
	I1002 10:51:35.657076       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1002 10:51:35.657133       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1002 10:51:35.657139       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1002 10:51:35.890721       1 range_allocator.go:373] Set node ingress-addon-legacy-982656 PodCIDR to [10.244.0.0/24]
	I1002 10:51:35.891715       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"cbfc1269-4a25-404a-b898-2d95a2f1464f", APIVersion:"apps/v1", ResourceVersion:"322", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I1002 10:51:35.970552       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"0be7a049-f874-4869-b3a9-3b1cba556a31", APIVersion:"apps/v1", ResourceVersion:"335", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-jlntv
	I1002 10:51:38.007138       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"d498b953-2f27-415a-bb08-91d9fb2fc347", APIVersion:"apps/v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1002 10:51:38.049348       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"bc38f11e-e015-438a-ba99-7f4fa7d8a631", APIVersion:"apps/v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-hql58
	I1002 10:51:38.068185       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"22feef1a-77eb-4a77-a3f0-b5561718a72c", APIVersion:"batch/v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-kj7v2
	I1002 10:51:38.134396       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"f7fd6de4-4b67-451c-b3df-c0458504d2a0", APIVersion:"batch/v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-297mg
	I1002 10:51:42.989712       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"22feef1a-77eb-4a77-a3f0-b5561718a72c", APIVersion:"batch/v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1002 10:51:45.010698       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"f7fd6de4-4b67-451c-b3df-c0458504d2a0", APIVersion:"batch/v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1002 10:54:36.112384       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"a348b442-728d-47f4-a877-c4169fb0a4e7", APIVersion:"apps/v1", ResourceVersion:"651", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1002 10:54:36.124792       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"168e8b47-6698-4839-a51a-4e631f8016f5", APIVersion:"apps/v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-8f2h4
	E1002 10:54:45.109484       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-26bnj" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [75d7b4282ff2a26d6346dd0f8846db0ae22e17075a868db3d2e71d771955493d] <==
	* W1002 10:51:36.833097       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1002 10:51:36.841946       1 node.go:136] Successfully retrieved node IP: 192.168.39.231
	I1002 10:51:36.841999       1 server_others.go:186] Using iptables Proxier.
	I1002 10:51:36.842179       1 server.go:583] Version: v1.18.20
	I1002 10:51:36.843917       1 config.go:133] Starting endpoints config controller
	I1002 10:51:36.843961       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1002 10:51:36.843985       1 config.go:315] Starting service config controller
	I1002 10:51:36.843988       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1002 10:51:36.951642       1 shared_informer.go:230] Caches are synced for service config 
	I1002 10:51:36.951874       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [f6a0b9e200e953dc814ea4fea01ba6a5d74817e983d0458b3c6b899dddd03204] <==
	* I1002 10:51:17.213487       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1002 10:51:17.213589       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 10:51:17.213613       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 10:51:17.213635       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1002 10:51:17.217010       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 10:51:17.217118       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 10:51:17.217204       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 10:51:17.217276       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 10:51:17.217345       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 10:51:17.217474       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 10:51:17.217576       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 10:51:17.217659       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 10:51:17.217804       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 10:51:17.217866       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 10:51:17.217914       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 10:51:17.217966       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 10:51:18.086007       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 10:51:18.089878       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 10:51:18.252260       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 10:51:18.272764       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 10:51:18.339834       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 10:51:18.392032       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 10:51:18.428795       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1002 10:51:20.113858       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1002 10:51:36.052854       1 factory.go:503] pod: kube-system/coredns-66bff467f8-jlntv is already present in the active queue
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 10:50:45 UTC, ends at Mon 2023-10-02 10:54:48 UTC. --
	Oct 02 10:51:45 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:51:45.286067    1428 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-qnhd5" (UniqueName: "kubernetes.io/secret/63f31c9f-2263-4d80-95b3-99f21a3b9a02-ingress-nginx-admission-token-qnhd5") on node "ingress-addon-legacy-982656" DevicePath ""
	Oct 02 10:51:45 ingress-addon-legacy-982656 kubelet[1428]: W1002 10:51:45.999171    1428 pod_container_deletor.go:77] Container "70113951e9a92d6d3e2ee162f4a5c733cf40c337fecbc7c3a68a70771ebed4f0" not found in pod's containers
	Oct 02 10:51:55 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:51:55.299784    1428 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Oct 02 10:51:55 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:51:55.424838    1428 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-dr2sc" (UniqueName: "kubernetes.io/secret/9d240fc9-440d-4957-91f5-87c392cac144-minikube-ingress-dns-token-dr2sc") pod "kube-ingress-dns-minikube" (UID: "9d240fc9-440d-4957-91f5-87c392cac144")
	Oct 02 10:52:12 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:52:12.908330    1428 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Oct 02 10:52:13 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:52:13.083288    1428 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-m6srx" (UniqueName: "kubernetes.io/secret/2ab6e312-e751-4154-896a-2c3458adbf4a-default-token-m6srx") pod "nginx" (UID: "2ab6e312-e751-4154-896a-2c3458adbf4a")
	Oct 02 10:54:36 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:54:36.130508    1428 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Oct 02 10:54:36 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:54:36.258008    1428 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-m6srx" (UniqueName: "kubernetes.io/secret/d42bfdc8-18c2-45f8-a4de-7a140fbfdac2-default-token-m6srx") pod "hello-world-app-5f5d8b66bb-8f2h4" (UID: "d42bfdc8-18c2-45f8-a4de-7a140fbfdac2")
	Oct 02 10:54:38 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:54:37.999522    1428 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e9fcb0fea946332b956a0e365d062fbaa2f9b3f12fbc5e341299037e2fa6b9cd
	Oct 02 10:54:38 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:54:38.036585    1428 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e9fcb0fea946332b956a0e365d062fbaa2f9b3f12fbc5e341299037e2fa6b9cd
	Oct 02 10:54:38 ingress-addon-legacy-982656 kubelet[1428]: E1002 10:54:38.037102    1428 remote_runtime.go:295] ContainerStatus "e9fcb0fea946332b956a0e365d062fbaa2f9b3f12fbc5e341299037e2fa6b9cd" from runtime service failed: rpc error: code = NotFound desc = could not find container "e9fcb0fea946332b956a0e365d062fbaa2f9b3f12fbc5e341299037e2fa6b9cd": container with ID starting with e9fcb0fea946332b956a0e365d062fbaa2f9b3f12fbc5e341299037e2fa6b9cd not found: ID does not exist
	Oct 02 10:54:38 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:54:38.164859    1428 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-dr2sc" (UniqueName: "kubernetes.io/secret/9d240fc9-440d-4957-91f5-87c392cac144-minikube-ingress-dns-token-dr2sc") pod "9d240fc9-440d-4957-91f5-87c392cac144" (UID: "9d240fc9-440d-4957-91f5-87c392cac144")
	Oct 02 10:54:38 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:54:38.180386    1428 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d240fc9-440d-4957-91f5-87c392cac144-minikube-ingress-dns-token-dr2sc" (OuterVolumeSpecName: "minikube-ingress-dns-token-dr2sc") pod "9d240fc9-440d-4957-91f5-87c392cac144" (UID: "9d240fc9-440d-4957-91f5-87c392cac144"). InnerVolumeSpecName "minikube-ingress-dns-token-dr2sc". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 10:54:38 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:54:38.265211    1428 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-dr2sc" (UniqueName: "kubernetes.io/secret/9d240fc9-440d-4957-91f5-87c392cac144-minikube-ingress-dns-token-dr2sc") on node "ingress-addon-legacy-982656" DevicePath ""
	Oct 02 10:54:38 ingress-addon-legacy-982656 kubelet[1428]: E1002 10:54:38.895800    1428 kubelet_pods.go:1235] Failed killing the pod "kube-ingress-dns-minikube": failed to "KillContainer" for "minikube-ingress-dns" with KillContainerError: "rpc error: code = NotFound desc = could not find container \"e9fcb0fea946332b956a0e365d062fbaa2f9b3f12fbc5e341299037e2fa6b9cd\": container with ID starting with e9fcb0fea946332b956a0e365d062fbaa2f9b3f12fbc5e341299037e2fa6b9cd not found: ID does not exist"
	Oct 02 10:54:40 ingress-addon-legacy-982656 kubelet[1428]: E1002 10:54:40.443572    1428 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-hql58.178a45010aaf0d95", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-hql58", UID:"3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f", APIVersion:"v1", ResourceVersion:"421", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-982656"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13ec63c19ecad95, ext:200182001249, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13ec63c19ecad95, ext:200182001249, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-hql58.178a45010aaf0d95" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 02 10:54:40 ingress-addon-legacy-982656 kubelet[1428]: E1002 10:54:40.493879    1428 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-hql58.178a45010aaf0d95", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-hql58", UID:"3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f", APIVersion:"v1", ResourceVersion:"421", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-982656"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13ec63c19ecad95, ext:200182001249, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13ec63c1bbb8559, ext:200212334117, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-hql58.178a45010aaf0d95" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 02 10:54:43 ingress-addon-legacy-982656 kubelet[1428]: W1002 10:54:43.089198    1428 pod_container_deletor.go:77] Container "65c05501e4da9f3347814211b94742bfc11dd1ad08474ed103cb11eee60c5e6a" not found in pod's containers
	Oct 02 10:54:44 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:54:44.586675    1428 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f-webhook-cert") pod "3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f" (UID: "3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f")
	Oct 02 10:54:44 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:54:44.586725    1428 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-rrq8r" (UniqueName: "kubernetes.io/secret/3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f-ingress-nginx-token-rrq8r") pod "3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f" (UID: "3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f")
	Oct 02 10:54:44 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:54:44.589587    1428 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f" (UID: "3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 10:54:44 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:54:44.590669    1428 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f-ingress-nginx-token-rrq8r" (OuterVolumeSpecName: "ingress-nginx-token-rrq8r") pod "3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f" (UID: "3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f"). InnerVolumeSpecName "ingress-nginx-token-rrq8r". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 10:54:44 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:54:44.687075    1428 reconciler.go:319] Volume detached for volume "ingress-nginx-token-rrq8r" (UniqueName: "kubernetes.io/secret/3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f-ingress-nginx-token-rrq8r") on node "ingress-addon-legacy-982656" DevicePath ""
	Oct 02 10:54:44 ingress-addon-legacy-982656 kubelet[1428]: I1002 10:54:44.687113    1428 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f-webhook-cert") on node "ingress-addon-legacy-982656" DevicePath ""
	Oct 02 10:54:44 ingress-addon-legacy-982656 kubelet[1428]: W1002 10:54:44.891903    1428 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/3386732c-8a1d-40ce-a39e-2c6d9aaf1f3f/volumes" does not exist
	
	* 
	* ==> storage-provisioner [40331f46e65bf468e25e7bd46fe9a41c7a895fbb26534e9aa7baf2d79e14bbf7] <==
	* I1002 10:51:37.223395       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 10:51:37.234578       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 10:51:37.234622       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 10:51:37.248780       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 10:51:37.249029       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-982656_a67b8c9e-3414-4945-ad5c-3d3da8862067!
	I1002 10:51:37.249683       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a4e3408-53c9-4692-8020-02601e790cef", APIVersion:"v1", ResourceVersion:"379", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-982656_a67b8c9e-3414-4945-ad5c-3d3da8862067 became leader
	I1002 10:51:37.349460       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-982656_a67b8c9e-3414-4945-ad5c-3d3da8862067!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-982656 -n ingress-addon-legacy-982656
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-982656 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (173.47s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224116 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224116 -- exec busybox-5bc68d56bd-h45vs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224116 -- exec busybox-5bc68d56bd-h45vs -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-224116 -- exec busybox-5bc68d56bd-h45vs -- sh -c "ping -c 1 192.168.39.1": exit status 1 (170.055918ms)

-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-h45vs): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224116 -- exec busybox-5bc68d56bd-jjswt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224116 -- exec busybox-5bc68d56bd-jjswt -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-224116 -- exec busybox-5bc68d56bd-jjswt -- sh -c "ping -c 1 192.168.39.1": exit status 1 (178.606948ms)

-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-jjswt): exit status 1
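Both pods fail with `ping: permission denied (are you root?)`. On Linux this typically means busybox's `ping` is running without root, without `CAP_NET_RAW`, and with a GID outside the node's `net.ipv4.ping_group_range` sysctl. A minimal sketch of interpreting that sysctl, assuming the kernel default value of `"1 0"` (an empty range; on a real node the value would come from `cat /proc/sys/net/ipv4/ping_group_range`):

```shell
# Interpret net.ipv4.ping_group_range, which holds "min max": when min > max,
# no group may open unprivileged ICMP echo sockets (kernel default is "1 0").
range="1 0"   # assumed value; on a node: range=$(cat /proc/sys/net/ipv4/ping_group_range)
min=${range%% *}
max=${range##* }
if [ "$min" -gt "$max" ]; then
  echo "unprivileged ping disabled"
else
  echo "unprivileged ping allowed for gids $min-$max"
fi
```

If the range is empty, the usual remedies are granting the container `CAP_NET_RAW` via its pod `securityContext` or widening the sysctl on the node; which one applies to this test setup is not determinable from the log alone.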
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-224116 -n multinode-224116
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 logs -n 25
E1002 11:01:55.305584  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-224116 logs -n 25: (1.450208706s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-461003 ssh -- ls                    | mount-start-2-461003 | jenkins | v1.31.2 | 02 Oct 23 10:59 UTC | 02 Oct 23 10:59 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-461003 ssh --                       | mount-start-2-461003 | jenkins | v1.31.2 | 02 Oct 23 10:59 UTC | 02 Oct 23 10:59 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-461003                           | mount-start-2-461003 | jenkins | v1.31.2 | 02 Oct 23 10:59 UTC | 02 Oct 23 10:59 UTC |
	| start   | -p mount-start-2-461003                           | mount-start-2-461003 | jenkins | v1.31.2 | 02 Oct 23 10:59 UTC | 02 Oct 23 10:59 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-461003 | jenkins | v1.31.2 | 02 Oct 23 10:59 UTC |                     |
	|         | --profile mount-start-2-461003                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-461003 ssh -- ls                    | mount-start-2-461003 | jenkins | v1.31.2 | 02 Oct 23 10:59 UTC | 02 Oct 23 10:59 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-461003 ssh --                       | mount-start-2-461003 | jenkins | v1.31.2 | 02 Oct 23 10:59 UTC | 02 Oct 23 10:59 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-461003                           | mount-start-2-461003 | jenkins | v1.31.2 | 02 Oct 23 10:59 UTC | 02 Oct 23 10:59 UTC |
	| delete  | -p mount-start-1-442328                           | mount-start-1-442328 | jenkins | v1.31.2 | 02 Oct 23 10:59 UTC | 02 Oct 23 10:59 UTC |
	| start   | -p multinode-224116                               | multinode-224116     | jenkins | v1.31.2 | 02 Oct 23 10:59 UTC | 02 Oct 23 11:01 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-224116 -- apply -f                   | multinode-224116     | jenkins | v1.31.2 | 02 Oct 23 11:01 UTC | 02 Oct 23 11:01 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-224116 -- rollout                    | multinode-224116     | jenkins | v1.31.2 | 02 Oct 23 11:01 UTC | 02 Oct 23 11:01 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-224116 -- get pods -o                | multinode-224116     | jenkins | v1.31.2 | 02 Oct 23 11:01 UTC | 02 Oct 23 11:01 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-224116 -- get pods -o                | multinode-224116     | jenkins | v1.31.2 | 02 Oct 23 11:01 UTC | 02 Oct 23 11:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-224116 -- exec                       | multinode-224116     | jenkins | v1.31.2 | 02 Oct 23 11:01 UTC | 02 Oct 23 11:01 UTC |
	|         | busybox-5bc68d56bd-h45vs --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-224116 -- exec                       | multinode-224116     | jenkins | v1.31.2 | 02 Oct 23 11:01 UTC | 02 Oct 23 11:01 UTC |
	|         | busybox-5bc68d56bd-jjswt --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-224116 -- exec                       | multinode-224116     | jenkins | v1.31.2 | 02 Oct 23 11:01 UTC | 02 Oct 23 11:01 UTC |
	|         | busybox-5bc68d56bd-h45vs --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-224116 -- exec                       | multinode-224116     | jenkins | v1.31.2 | 02 Oct 23 11:01 UTC | 02 Oct 23 11:01 UTC |
	|         | busybox-5bc68d56bd-jjswt --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-224116 -- exec                       | multinode-224116     | jenkins | v1.31.2 | 02 Oct 23 11:01 UTC | 02 Oct 23 11:01 UTC |
	|         | busybox-5bc68d56bd-h45vs -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-224116 -- exec                       | multinode-224116     | jenkins | v1.31.2 | 02 Oct 23 11:01 UTC | 02 Oct 23 11:01 UTC |
	|         | busybox-5bc68d56bd-jjswt -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-224116 -- get pods -o                | multinode-224116     | jenkins | v1.31.2 | 02 Oct 23 11:01 UTC | 02 Oct 23 11:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-224116 -- exec                       | multinode-224116     | jenkins | v1.31.2 | 02 Oct 23 11:01 UTC | 02 Oct 23 11:01 UTC |
	|         | busybox-5bc68d56bd-h45vs                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-224116 -- exec                       | multinode-224116     | jenkins | v1.31.2 | 02 Oct 23 11:01 UTC |                     |
	|         | busybox-5bc68d56bd-h45vs -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-224116 -- exec                       | multinode-224116     | jenkins | v1.31.2 | 02 Oct 23 11:01 UTC | 02 Oct 23 11:01 UTC |
	|         | busybox-5bc68d56bd-jjswt                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-224116 -- exec                       | multinode-224116     | jenkins | v1.31.2 | 02 Oct 23 11:01 UTC |                     |
	|         | busybox-5bc68d56bd-jjswt -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 10:59:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 10:59:54.619292  352564 out.go:296] Setting OutFile to fd 1 ...
	I1002 10:59:54.619401  352564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:59:54.619410  352564 out.go:309] Setting ErrFile to fd 2...
	I1002 10:59:54.619415  352564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:59:54.619612  352564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 10:59:54.620188  352564 out.go:303] Setting JSON to false
	I1002 10:59:54.621204  352564 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6141,"bootTime":1696238254,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 10:59:54.621269  352564 start.go:138] virtualization: kvm guest
	I1002 10:59:54.623676  352564 out.go:177] * [multinode-224116] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 10:59:54.625158  352564 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 10:59:54.626545  352564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 10:59:54.625221  352564 notify.go:220] Checking for updates...
	I1002 10:59:54.628977  352564 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 10:59:54.630336  352564 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 10:59:54.631897  352564 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 10:59:54.633392  352564 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 10:59:54.634996  352564 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 10:59:54.669409  352564 out.go:177] * Using the kvm2 driver based on user configuration
	I1002 10:59:54.670961  352564 start.go:298] selected driver: kvm2
	I1002 10:59:54.670978  352564 start.go:902] validating driver "kvm2" against <nil>
	I1002 10:59:54.670994  352564 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 10:59:54.671930  352564 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:59:54.672017  352564 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 10:59:54.686604  352564 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 10:59:54.686681  352564 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 10:59:54.686933  352564 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 10:59:54.686976  352564 cni.go:84] Creating CNI manager for ""
	I1002 10:59:54.686987  352564 cni.go:136] 0 nodes found, recommending kindnet
	I1002 10:59:54.686992  352564 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 10:59:54.687003  352564 start_flags.go:321] config:
	{Name:multinode-224116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-224116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:59:54.687129  352564 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:59:54.688878  352564 out.go:177] * Starting control plane node multinode-224116 in cluster multinode-224116
	I1002 10:59:54.690097  352564 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 10:59:54.690144  352564 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1002 10:59:54.690154  352564 cache.go:57] Caching tarball of preloaded images
	I1002 10:59:54.690230  352564 preload.go:174] Found /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 10:59:54.690240  352564 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 10:59:54.690667  352564 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/config.json ...
	I1002 10:59:54.690695  352564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/config.json: {Name:mk1f8d79decec32c2a1de8c81db1e5114ca7d2c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:59:54.690824  352564 start.go:365] acquiring machines lock for multinode-224116: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 10:59:54.690850  352564 start.go:369] acquired machines lock for "multinode-224116" in 14.708µs
	I1002 10:59:54.690870  352564 start.go:93] Provisioning new machine with config: &{Name:multinode-224116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-224116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 10:59:54.690932  352564 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 10:59:54.692733  352564 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 10:59:54.692869  352564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:59:54.692914  352564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:59:54.706856  352564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46365
	I1002 10:59:54.707274  352564 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:59:54.707781  352564 main.go:141] libmachine: Using API Version  1
	I1002 10:59:54.707807  352564 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:59:54.708163  352564 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:59:54.708359  352564 main.go:141] libmachine: (multinode-224116) Calling .GetMachineName
	I1002 10:59:54.708519  352564 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 10:59:54.708683  352564 start.go:159] libmachine.API.Create for "multinode-224116" (driver="kvm2")
	I1002 10:59:54.708712  352564 client.go:168] LocalClient.Create starting
	I1002 10:59:54.708746  352564 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem
	I1002 10:59:54.708796  352564 main.go:141] libmachine: Decoding PEM data...
	I1002 10:59:54.708823  352564 main.go:141] libmachine: Parsing certificate...
	I1002 10:59:54.708890  352564 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem
	I1002 10:59:54.708918  352564 main.go:141] libmachine: Decoding PEM data...
	I1002 10:59:54.708937  352564 main.go:141] libmachine: Parsing certificate...
	I1002 10:59:54.708966  352564 main.go:141] libmachine: Running pre-create checks...
	I1002 10:59:54.708980  352564 main.go:141] libmachine: (multinode-224116) Calling .PreCreateCheck
	I1002 10:59:54.709331  352564 main.go:141] libmachine: (multinode-224116) Calling .GetConfigRaw
	I1002 10:59:54.709930  352564 main.go:141] libmachine: Creating machine...
	I1002 10:59:54.709955  352564 main.go:141] libmachine: (multinode-224116) Calling .Create
	I1002 10:59:54.710941  352564 main.go:141] libmachine: (multinode-224116) Creating KVM machine...
	I1002 10:59:54.712183  352564 main.go:141] libmachine: (multinode-224116) DBG | found existing default KVM network
	I1002 10:59:54.712854  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 10:59:54.712708  352587 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015350}
	I1002 10:59:54.718462  352564 main.go:141] libmachine: (multinode-224116) DBG | trying to create private KVM network mk-multinode-224116 192.168.39.0/24...
	I1002 10:59:54.788602  352564 main.go:141] libmachine: (multinode-224116) DBG | private KVM network mk-multinode-224116 192.168.39.0/24 created
	I1002 10:59:54.788642  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 10:59:54.788558  352587 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 10:59:54.788656  352564 main.go:141] libmachine: (multinode-224116) Setting up store path in /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116 ...
	I1002 10:59:54.788680  352564 main.go:141] libmachine: (multinode-224116) Building disk image from file:///home/jenkins/minikube-integration/17340-332611/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1002 10:59:54.788700  352564 main.go:141] libmachine: (multinode-224116) Downloading /home/jenkins/minikube-integration/17340-332611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17340-332611/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1002 10:59:55.015615  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 10:59:55.015489  352587 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa...
	I1002 10:59:55.350795  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 10:59:55.350620  352587 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/multinode-224116.rawdisk...
	I1002 10:59:55.350834  352564 main.go:141] libmachine: (multinode-224116) DBG | Writing magic tar header
	I1002 10:59:55.350854  352564 main.go:141] libmachine: (multinode-224116) DBG | Writing SSH key tar header
	I1002 10:59:55.351351  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 10:59:55.351235  352587 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116 ...
	I1002 10:59:55.351393  352564 main.go:141] libmachine: (multinode-224116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116
	I1002 10:59:55.351408  352564 main.go:141] libmachine: (multinode-224116) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116 (perms=drwx------)
	I1002 10:59:55.351436  352564 main.go:141] libmachine: (multinode-224116) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube/machines (perms=drwxr-xr-x)
	I1002 10:59:55.351451  352564 main.go:141] libmachine: (multinode-224116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube/machines
	I1002 10:59:55.351468  352564 main.go:141] libmachine: (multinode-224116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 10:59:55.351480  352564 main.go:141] libmachine: (multinode-224116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611
	I1002 10:59:55.351503  352564 main.go:141] libmachine: (multinode-224116) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube (perms=drwxr-xr-x)
	I1002 10:59:55.351525  352564 main.go:141] libmachine: (multinode-224116) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611 (perms=drwxrwxr-x)
	I1002 10:59:55.351537  352564 main.go:141] libmachine: (multinode-224116) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 10:59:55.351556  352564 main.go:141] libmachine: (multinode-224116) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 10:59:55.351571  352564 main.go:141] libmachine: (multinode-224116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1002 10:59:55.351582  352564 main.go:141] libmachine: (multinode-224116) Creating domain...
	I1002 10:59:55.351603  352564 main.go:141] libmachine: (multinode-224116) DBG | Checking permissions on dir: /home/jenkins
	I1002 10:59:55.351616  352564 main.go:141] libmachine: (multinode-224116) DBG | Checking permissions on dir: /home
	I1002 10:59:55.351633  352564 main.go:141] libmachine: (multinode-224116) DBG | Skipping /home - not owner
	I1002 10:59:55.352888  352564 main.go:141] libmachine: (multinode-224116) define libvirt domain using xml: 
	I1002 10:59:55.352916  352564 main.go:141] libmachine: (multinode-224116) <domain type='kvm'>
	I1002 10:59:55.352930  352564 main.go:141] libmachine: (multinode-224116)   <name>multinode-224116</name>
	I1002 10:59:55.352940  352564 main.go:141] libmachine: (multinode-224116)   <memory unit='MiB'>2200</memory>
	I1002 10:59:55.352955  352564 main.go:141] libmachine: (multinode-224116)   <vcpu>2</vcpu>
	I1002 10:59:55.352966  352564 main.go:141] libmachine: (multinode-224116)   <features>
	I1002 10:59:55.352974  352564 main.go:141] libmachine: (multinode-224116)     <acpi/>
	I1002 10:59:55.352982  352564 main.go:141] libmachine: (multinode-224116)     <apic/>
	I1002 10:59:55.353001  352564 main.go:141] libmachine: (multinode-224116)     <pae/>
	I1002 10:59:55.353026  352564 main.go:141] libmachine: (multinode-224116)     
	I1002 10:59:55.353041  352564 main.go:141] libmachine: (multinode-224116)   </features>
	I1002 10:59:55.353051  352564 main.go:141] libmachine: (multinode-224116)   <cpu mode='host-passthrough'>
	I1002 10:59:55.353062  352564 main.go:141] libmachine: (multinode-224116)   
	I1002 10:59:55.353074  352564 main.go:141] libmachine: (multinode-224116)   </cpu>
	I1002 10:59:55.353086  352564 main.go:141] libmachine: (multinode-224116)   <os>
	I1002 10:59:55.353104  352564 main.go:141] libmachine: (multinode-224116)     <type>hvm</type>
	I1002 10:59:55.353118  352564 main.go:141] libmachine: (multinode-224116)     <boot dev='cdrom'/>
	I1002 10:59:55.353131  352564 main.go:141] libmachine: (multinode-224116)     <boot dev='hd'/>
	I1002 10:59:55.353144  352564 main.go:141] libmachine: (multinode-224116)     <bootmenu enable='no'/>
	I1002 10:59:55.353156  352564 main.go:141] libmachine: (multinode-224116)   </os>
	I1002 10:59:55.353167  352564 main.go:141] libmachine: (multinode-224116)   <devices>
	I1002 10:59:55.353176  352564 main.go:141] libmachine: (multinode-224116)     <disk type='file' device='cdrom'>
	I1002 10:59:55.353184  352564 main.go:141] libmachine: (multinode-224116)       <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/boot2docker.iso'/>
	I1002 10:59:55.353194  352564 main.go:141] libmachine: (multinode-224116)       <target dev='hdc' bus='scsi'/>
	I1002 10:59:55.353201  352564 main.go:141] libmachine: (multinode-224116)       <readonly/>
	I1002 10:59:55.353242  352564 main.go:141] libmachine: (multinode-224116)     </disk>
	I1002 10:59:55.353273  352564 main.go:141] libmachine: (multinode-224116)     <disk type='file' device='disk'>
	I1002 10:59:55.353292  352564 main.go:141] libmachine: (multinode-224116)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 10:59:55.353310  352564 main.go:141] libmachine: (multinode-224116)       <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/multinode-224116.rawdisk'/>
	I1002 10:59:55.353325  352564 main.go:141] libmachine: (multinode-224116)       <target dev='hda' bus='virtio'/>
	I1002 10:59:55.353339  352564 main.go:141] libmachine: (multinode-224116)     </disk>
	I1002 10:59:55.353353  352564 main.go:141] libmachine: (multinode-224116)     <interface type='network'>
	I1002 10:59:55.353367  352564 main.go:141] libmachine: (multinode-224116)       <source network='mk-multinode-224116'/>
	I1002 10:59:55.353383  352564 main.go:141] libmachine: (multinode-224116)       <model type='virtio'/>
	I1002 10:59:55.353395  352564 main.go:141] libmachine: (multinode-224116)     </interface>
	I1002 10:59:55.353417  352564 main.go:141] libmachine: (multinode-224116)     <interface type='network'>
	I1002 10:59:55.353439  352564 main.go:141] libmachine: (multinode-224116)       <source network='default'/>
	I1002 10:59:55.353454  352564 main.go:141] libmachine: (multinode-224116)       <model type='virtio'/>
	I1002 10:59:55.353465  352564 main.go:141] libmachine: (multinode-224116)     </interface>
	I1002 10:59:55.353471  352564 main.go:141] libmachine: (multinode-224116)     <serial type='pty'>
	I1002 10:59:55.353479  352564 main.go:141] libmachine: (multinode-224116)       <target port='0'/>
	I1002 10:59:55.353485  352564 main.go:141] libmachine: (multinode-224116)     </serial>
	I1002 10:59:55.353494  352564 main.go:141] libmachine: (multinode-224116)     <console type='pty'>
	I1002 10:59:55.353505  352564 main.go:141] libmachine: (multinode-224116)       <target type='serial' port='0'/>
	I1002 10:59:55.353526  352564 main.go:141] libmachine: (multinode-224116)     </console>
	I1002 10:59:55.353541  352564 main.go:141] libmachine: (multinode-224116)     <rng model='virtio'>
	I1002 10:59:55.353548  352564 main.go:141] libmachine: (multinode-224116)       <backend model='random'>/dev/random</backend>
	I1002 10:59:55.353557  352564 main.go:141] libmachine: (multinode-224116)     </rng>
	I1002 10:59:55.353561  352564 main.go:141] libmachine: (multinode-224116)     
	I1002 10:59:55.353575  352564 main.go:141] libmachine: (multinode-224116)     
	I1002 10:59:55.353587  352564 main.go:141] libmachine: (multinode-224116)   </devices>
	I1002 10:59:55.353599  352564 main.go:141] libmachine: (multinode-224116) </domain>
	I1002 10:59:55.353615  352564 main.go:141] libmachine: (multinode-224116) 
	I1002 10:59:55.357874  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:ca:1f:08 in network default
	I1002 10:59:55.358523  352564 main.go:141] libmachine: (multinode-224116) Ensuring networks are active...
	I1002 10:59:55.358559  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 10:59:55.359226  352564 main.go:141] libmachine: (multinode-224116) Ensuring network default is active
	I1002 10:59:55.359529  352564 main.go:141] libmachine: (multinode-224116) Ensuring network mk-multinode-224116 is active
	I1002 10:59:55.359986  352564 main.go:141] libmachine: (multinode-224116) Getting domain xml...
	I1002 10:59:55.360622  352564 main.go:141] libmachine: (multinode-224116) Creating domain...
	I1002 10:59:56.571599  352564 main.go:141] libmachine: (multinode-224116) Waiting to get IP...
	I1002 10:59:56.572464  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 10:59:56.572876  352564 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 10:59:56.572911  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 10:59:56.572856  352587 retry.go:31] will retry after 299.323903ms: waiting for machine to come up
	I1002 10:59:56.873413  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 10:59:56.873840  352564 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 10:59:56.873872  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 10:59:56.873793  352587 retry.go:31] will retry after 333.141218ms: waiting for machine to come up
	I1002 10:59:57.208581  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 10:59:57.209014  352564 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 10:59:57.209037  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 10:59:57.208987  352587 retry.go:31] will retry after 327.393011ms: waiting for machine to come up
	I1002 10:59:57.537513  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 10:59:57.538043  352564 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 10:59:57.538069  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 10:59:57.537978  352587 retry.go:31] will retry after 536.103744ms: waiting for machine to come up
	I1002 10:59:58.075438  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 10:59:58.075933  352564 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 10:59:58.075963  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 10:59:58.075883  352587 retry.go:31] will retry after 615.153499ms: waiting for machine to come up
	I1002 10:59:58.692613  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 10:59:58.693031  352564 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 10:59:58.693056  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 10:59:58.692972  352587 retry.go:31] will retry after 881.649585ms: waiting for machine to come up
	I1002 10:59:59.575792  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 10:59:59.576225  352564 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 10:59:59.576256  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 10:59:59.576187  352587 retry.go:31] will retry after 760.220403ms: waiting for machine to come up
	I1002 11:00:00.337712  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:00.338097  352564 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:00:00.338134  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:00:00.338038  352587 retry.go:31] will retry after 1.185923372s: waiting for machine to come up
	I1002 11:00:01.525472  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:01.525889  352564 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:00:01.525925  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:00:01.525813  352587 retry.go:31] will retry after 1.733149193s: waiting for machine to come up
	I1002 11:00:03.261748  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:03.262198  352564 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:00:03.262229  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:00:03.262150  352587 retry.go:31] will retry after 1.471414016s: waiting for machine to come up
	I1002 11:00:04.734844  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:04.735338  352564 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:00:04.735381  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:00:04.735279  352587 retry.go:31] will retry after 1.929017244s: waiting for machine to come up
	I1002 11:00:06.666323  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:06.666815  352564 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:00:06.666843  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:00:06.666771  352587 retry.go:31] will retry after 2.227743339s: waiting for machine to come up
	I1002 11:00:08.897227  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:08.897682  352564 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:00:08.897715  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:00:08.897616  352587 retry.go:31] will retry after 4.017344897s: waiting for machine to come up
	I1002 11:00:12.919321  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:12.919717  352564 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:00:12.919746  352564 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:00:12.919667  352587 retry.go:31] will retry after 5.247259906s: waiting for machine to come up
	I1002 11:00:18.168566  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:18.169025  352564 main.go:141] libmachine: (multinode-224116) Found IP for machine: 192.168.39.165
	I1002 11:00:18.169066  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has current primary IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:18.169083  352564 main.go:141] libmachine: (multinode-224116) Reserving static IP address...
	I1002 11:00:18.169426  352564 main.go:141] libmachine: (multinode-224116) DBG | unable to find host DHCP lease matching {name: "multinode-224116", mac: "52:54:00:85:8e:87", ip: "192.168.39.165"} in network mk-multinode-224116
	I1002 11:00:18.242582  352564 main.go:141] libmachine: (multinode-224116) Reserved static IP address: 192.168.39.165
	I1002 11:00:18.242618  352564 main.go:141] libmachine: (multinode-224116) DBG | Getting to WaitForSSH function...
	I1002 11:00:18.242630  352564 main.go:141] libmachine: (multinode-224116) Waiting for SSH to be available...
	I1002 11:00:18.245246  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:18.245807  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:minikube Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:18.245850  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:18.245877  352564 main.go:141] libmachine: (multinode-224116) DBG | Using SSH client type: external
	I1002 11:00:18.245936  352564 main.go:141] libmachine: (multinode-224116) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa (-rw-------)
	I1002 11:00:18.245979  352564 main.go:141] libmachine: (multinode-224116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.165 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:00:18.245998  352564 main.go:141] libmachine: (multinode-224116) DBG | About to run SSH command:
	I1002 11:00:18.246009  352564 main.go:141] libmachine: (multinode-224116) DBG | exit 0
	I1002 11:00:18.338292  352564 main.go:141] libmachine: (multinode-224116) DBG | SSH cmd err, output: <nil>: 
	I1002 11:00:18.338566  352564 main.go:141] libmachine: (multinode-224116) KVM machine creation complete!
	I1002 11:00:18.338861  352564 main.go:141] libmachine: (multinode-224116) Calling .GetConfigRaw
	I1002 11:00:18.339453  352564 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:00:18.339660  352564 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:00:18.339854  352564 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 11:00:18.339870  352564 main.go:141] libmachine: (multinode-224116) Calling .GetState
	I1002 11:00:18.341280  352564 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 11:00:18.341295  352564 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 11:00:18.341301  352564 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 11:00:18.341308  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:00:18.343738  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:18.344087  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:18.344114  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:18.344219  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:00:18.344396  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:18.344590  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:18.344742  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:00:18.344941  352564 main.go:141] libmachine: Using SSH client type: native
	I1002 11:00:18.345330  352564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1002 11:00:18.345344  352564 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 11:00:18.465669  352564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:00:18.465702  352564 main.go:141] libmachine: Detecting the provisioner...
	I1002 11:00:18.465713  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:00:18.468440  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:18.468792  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:18.468828  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:18.468949  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:00:18.469151  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:18.469318  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:18.469512  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:00:18.469664  352564 main.go:141] libmachine: Using SSH client type: native
	I1002 11:00:18.469968  352564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1002 11:00:18.469980  352564 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 11:00:18.591082  352564 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1002 11:00:18.591158  352564 main.go:141] libmachine: found compatible host: buildroot
	I1002 11:00:18.591173  352564 main.go:141] libmachine: Provisioning with buildroot...
	I1002 11:00:18.591191  352564 main.go:141] libmachine: (multinode-224116) Calling .GetMachineName
	I1002 11:00:18.591468  352564 buildroot.go:166] provisioning hostname "multinode-224116"
	I1002 11:00:18.591495  352564 main.go:141] libmachine: (multinode-224116) Calling .GetMachineName
	I1002 11:00:18.591681  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:00:18.594396  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:18.594884  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:18.594925  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:18.595058  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:00:18.595220  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:18.595378  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:18.595530  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:00:18.595701  352564 main.go:141] libmachine: Using SSH client type: native
	I1002 11:00:18.596074  352564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1002 11:00:18.596091  352564 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-224116 && echo "multinode-224116" | sudo tee /etc/hostname
	I1002 11:00:18.731407  352564 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-224116
	
	I1002 11:00:18.731461  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:00:18.734405  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:18.734792  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:18.734828  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:18.734963  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:00:18.735232  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:18.735431  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:18.735612  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:00:18.735796  352564 main.go:141] libmachine: Using SSH client type: native
	I1002 11:00:18.736132  352564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1002 11:00:18.736151  352564 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-224116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-224116/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-224116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:00:18.867406  352564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:00:18.867445  352564 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:00:18.867516  352564 buildroot.go:174] setting up certificates
	I1002 11:00:18.867529  352564 provision.go:83] configureAuth start
	I1002 11:00:18.867545  352564 main.go:141] libmachine: (multinode-224116) Calling .GetMachineName
	I1002 11:00:18.867873  352564 main.go:141] libmachine: (multinode-224116) Calling .GetIP
	I1002 11:00:18.870179  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:18.870471  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:18.870499  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:18.870674  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:00:18.872828  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:18.873124  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:18.873155  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:18.873273  352564 provision.go:138] copyHostCerts
	I1002 11:00:18.873314  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:00:18.873363  352564 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:00:18.873375  352564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:00:18.873441  352564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:00:18.873602  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:00:18.873653  352564 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:00:18.873663  352564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:00:18.873706  352564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:00:18.873785  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:00:18.873809  352564 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:00:18.873816  352564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:00:18.873849  352564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:00:18.873914  352564 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.multinode-224116 san=[192.168.39.165 192.168.39.165 localhost 127.0.0.1 minikube multinode-224116]
	I1002 11:00:19.017847  352564 provision.go:172] copyRemoteCerts
	I1002 11:00:19.017922  352564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:00:19.017957  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:00:19.020893  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.021199  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:19.021243  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.021415  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:00:19.021665  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:19.021865  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:00:19.022056  352564 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa Username:docker}
	I1002 11:00:19.111338  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 11:00:19.111415  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1002 11:00:19.135043  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 11:00:19.135111  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 11:00:19.157904  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 11:00:19.157970  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:00:19.180218  352564 provision.go:86] duration metric: configureAuth took 312.67165ms
	I1002 11:00:19.180247  352564 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:00:19.180442  352564 config.go:182] Loaded profile config "multinode-224116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:00:19.180541  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:00:19.183499  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.183888  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:19.183915  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.184121  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:00:19.184331  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:19.184502  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:19.184654  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:00:19.184823  352564 main.go:141] libmachine: Using SSH client type: native
	I1002 11:00:19.185145  352564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1002 11:00:19.185161  352564 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:00:19.500295  352564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:00:19.500322  352564 main.go:141] libmachine: Checking connection to Docker...
	I1002 11:00:19.500334  352564 main.go:141] libmachine: (multinode-224116) Calling .GetURL
	I1002 11:00:19.501780  352564 main.go:141] libmachine: (multinode-224116) DBG | Using libvirt version 6000000
	I1002 11:00:19.504079  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.504487  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:19.504520  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.504607  352564 main.go:141] libmachine: Docker is up and running!
	I1002 11:00:19.504632  352564 main.go:141] libmachine: Reticulating splines...
	I1002 11:00:19.504642  352564 client.go:171] LocalClient.Create took 24.795918324s
	I1002 11:00:19.504675  352564 start.go:167] duration metric: libmachine.API.Create for "multinode-224116" took 24.795993022s
	I1002 11:00:19.504688  352564 start.go:300] post-start starting for "multinode-224116" (driver="kvm2")
	I1002 11:00:19.504705  352564 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:00:19.504733  352564 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:00:19.505025  352564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:00:19.505051  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:00:19.507912  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.508302  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:19.508327  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.508510  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:00:19.508710  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:19.508876  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:00:19.509030  352564 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa Username:docker}
	I1002 11:00:19.599394  352564 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:00:19.603528  352564 command_runner.go:130] > NAME=Buildroot
	I1002 11:00:19.603553  352564 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I1002 11:00:19.603558  352564 command_runner.go:130] > ID=buildroot
	I1002 11:00:19.603564  352564 command_runner.go:130] > VERSION_ID=2021.02.12
	I1002 11:00:19.603569  352564 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1002 11:00:19.603598  352564 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:00:19.603617  352564 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:00:19.603692  352564 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:00:19.603790  352564 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:00:19.603804  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> /etc/ssl/certs/3398652.pem
	I1002 11:00:19.603918  352564 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:00:19.612150  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:00:19.635295  352564 start.go:303] post-start completed in 130.589939ms
	I1002 11:00:19.635364  352564 main.go:141] libmachine: (multinode-224116) Calling .GetConfigRaw
	I1002 11:00:19.635975  352564 main.go:141] libmachine: (multinode-224116) Calling .GetIP
	I1002 11:00:19.638706  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.639100  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:19.639125  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.639446  352564 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/config.json ...
	I1002 11:00:19.639661  352564 start.go:128] duration metric: createHost completed in 24.948719303s
	I1002 11:00:19.639685  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:00:19.641665  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.641937  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:19.641968  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.642101  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:00:19.642293  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:19.642481  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:19.642634  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:00:19.642814  352564 main.go:141] libmachine: Using SSH client type: native
	I1002 11:00:19.643126  352564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1002 11:00:19.643138  352564 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:00:19.767010  352564 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696244419.750026866
	
	I1002 11:00:19.767044  352564 fix.go:206] guest clock: 1696244419.750026866
	I1002 11:00:19.767055  352564 fix.go:219] Guest: 2023-10-02 11:00:19.750026866 +0000 UTC Remote: 2023-10-02 11:00:19.639673199 +0000 UTC m=+25.052955970 (delta=110.353667ms)
	I1002 11:00:19.767083  352564 fix.go:190] guest clock delta is within tolerance: 110.353667ms
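The clock probe logged as `date +%!s(MISSING).%!N(MISSING)` is, after undoing the same `fmt` artifact, just `date +%s.%N`: seconds-since-epoch with nanoseconds, which minikube compares against the host clock to compute the drift (`delta=110.353667ms` here, within tolerance). The probe on its own:

```shell
# Print epoch seconds and nanoseconds, as the guest-clock check does.
# Requires GNU date for the %N nanosecond field.
date +%s.%N
```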
	I1002 11:00:19.767089  352564 start.go:83] releasing machines lock for "multinode-224116", held for 25.076230401s
	I1002 11:00:19.767116  352564 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:00:19.767455  352564 main.go:141] libmachine: (multinode-224116) Calling .GetIP
	I1002 11:00:19.770099  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.770545  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:19.770569  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.770729  352564 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:00:19.771205  352564 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:00:19.771523  352564 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:00:19.771634  352564 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:00:19.771687  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:00:19.771751  352564 ssh_runner.go:195] Run: cat /version.json
	I1002 11:00:19.771778  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:00:19.774529  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.774558  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.774877  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:19.774914  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.774940  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:19.774964  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:19.775027  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:00:19.775182  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:00:19.775200  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:19.775311  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:19.775361  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:00:19.775494  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:00:19.775563  352564 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa Username:docker}
	I1002 11:00:19.775646  352564 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa Username:docker}
	I1002 11:00:19.859229  352564 command_runner.go:130] > {"iso_version": "v1.31.0-1695060926-17240", "kicbase_version": "v0.0.40-1694798187-17250", "minikube_version": "v1.31.2", "commit": "0402681e4770013826956f326b174c70611f3073"}
	I1002 11:00:19.859379  352564 ssh_runner.go:195] Run: systemctl --version
	I1002 11:00:19.887999  352564 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 11:00:19.888037  352564 command_runner.go:130] > systemd 247 (247)
	I1002 11:00:19.888049  352564 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1002 11:00:19.888108  352564 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:00:20.047945  352564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 11:00:20.053772  352564 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 11:00:20.053812  352564 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:00:20.053868  352564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:00:20.070158  352564 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1002 11:00:20.070189  352564 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:00:20.070197  352564 start.go:469] detecting cgroup driver to use...
	I1002 11:00:20.070252  352564 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:00:20.084359  352564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:00:20.096956  352564 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:00:20.097027  352564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:00:20.109947  352564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:00:20.123374  352564 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:00:20.137360  352564 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1002 11:00:20.224976  352564 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:00:20.346139  352564 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1002 11:00:20.346181  352564 docker.go:213] disabling docker service ...
	I1002 11:00:20.346237  352564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:00:20.359908  352564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:00:20.372065  352564 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1002 11:00:20.372148  352564 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:00:20.386551  352564 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1002 11:00:20.488344  352564 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:00:20.602389  352564 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1002 11:00:20.602424  352564 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1002 11:00:20.602555  352564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:00:20.615787  352564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:00:20.632768  352564 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 11:00:20.633141  352564 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:00:20.633200  352564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:00:20.643847  352564 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:00:20.643911  352564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:00:20.654006  352564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:00:20.664307  352564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
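The four `sed` invocations above rewrite `/etc/crio/crio.conf.d/02-crio.conf` in place: pin the pause image, switch the cgroup driver to `cgroupfs`, and replace `conmon_cgroup` by deleting the old line and appending a new one after `cgroup_manager`. A sketch of the same edits against a scratch copy (the sample starting values are hypothetical; the real file's contents vary by ISO build):

```shell
# Apply the logged sed edits to a temp copy of 02-crio.conf.
conf=$(mktemp)
printf '%s\n' \
  'pause_image = "registry.k8s.io/pause:3.6"' \
  'cgroup_manager = "systemd"' \
  'conmon_cgroup = "system.slice"' > "$conf"
# Pin the pause image cri-o should use.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
# Match the kubelet's cgroup driver.
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
# Drop any existing conmon_cgroup, then append the new value after cgroup_manager.
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
cat "$conf"
rm -f "$conf"
```

The delete-then-append pair is what keeps the file idempotent across repeated provisioning runs: a second pass never accumulates duplicate `conmon_cgroup` lines.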
	I1002 11:00:20.674424  352564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:00:20.685370  352564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:00:20.694484  352564 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:00:20.694524  352564 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:00:20.694564  352564 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:00:20.708202  352564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:00:20.717250  352564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:00:20.828562  352564 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:00:20.992759  352564 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:00:20.992839  352564 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:00:20.999487  352564 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 11:00:20.999509  352564 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 11:00:20.999520  352564 command_runner.go:130] > Device: 16h/22d	Inode: 703         Links: 1
	I1002 11:00:20.999529  352564 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 11:00:20.999536  352564 command_runner.go:130] > Access: 2023-10-02 11:00:20.966105006 +0000
	I1002 11:00:20.999545  352564 command_runner.go:130] > Modify: 2023-10-02 11:00:20.966105006 +0000
	I1002 11:00:20.999553  352564 command_runner.go:130] > Change: 2023-10-02 11:00:20.966105006 +0000
	I1002 11:00:20.999564  352564 command_runner.go:130] >  Birth: -
	I1002 11:00:20.999866  352564 start.go:537] Will wait 60s for crictl version
	I1002 11:00:20.999925  352564 ssh_runner.go:195] Run: which crictl
	I1002 11:00:21.003730  352564 command_runner.go:130] > /usr/bin/crictl
	I1002 11:00:21.003938  352564 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:00:21.041764  352564 command_runner.go:130] > Version:  0.1.0
	I1002 11:00:21.041791  352564 command_runner.go:130] > RuntimeName:  cri-o
	I1002 11:00:21.041796  352564 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1002 11:00:21.041801  352564 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 11:00:21.045720  352564 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:00:21.045820  352564 ssh_runner.go:195] Run: crio --version
	I1002 11:00:21.086387  352564 command_runner.go:130] > crio version 1.24.1
	I1002 11:00:21.086415  352564 command_runner.go:130] > Version:          1.24.1
	I1002 11:00:21.086429  352564 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1002 11:00:21.086436  352564 command_runner.go:130] > GitTreeState:     dirty
	I1002 11:00:21.086445  352564 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1002 11:00:21.086452  352564 command_runner.go:130] > GoVersion:        go1.19.9
	I1002 11:00:21.086459  352564 command_runner.go:130] > Compiler:         gc
	I1002 11:00:21.086466  352564 command_runner.go:130] > Platform:         linux/amd64
	I1002 11:00:21.086478  352564 command_runner.go:130] > Linkmode:         dynamic
	I1002 11:00:21.086493  352564 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 11:00:21.086503  352564 command_runner.go:130] > SeccompEnabled:   true
	I1002 11:00:21.086510  352564 command_runner.go:130] > AppArmorEnabled:  false
	I1002 11:00:21.087728  352564 ssh_runner.go:195] Run: crio --version
	I1002 11:00:21.138179  352564 command_runner.go:130] > crio version 1.24.1
	I1002 11:00:21.138206  352564 command_runner.go:130] > Version:          1.24.1
	I1002 11:00:21.138216  352564 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1002 11:00:21.138227  352564 command_runner.go:130] > GitTreeState:     dirty
	I1002 11:00:21.138239  352564 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1002 11:00:21.138248  352564 command_runner.go:130] > GoVersion:        go1.19.9
	I1002 11:00:21.138254  352564 command_runner.go:130] > Compiler:         gc
	I1002 11:00:21.138258  352564 command_runner.go:130] > Platform:         linux/amd64
	I1002 11:00:21.138263  352564 command_runner.go:130] > Linkmode:         dynamic
	I1002 11:00:21.138271  352564 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 11:00:21.138276  352564 command_runner.go:130] > SeccompEnabled:   true
	I1002 11:00:21.138284  352564 command_runner.go:130] > AppArmorEnabled:  false
	I1002 11:00:21.140380  352564 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:00:21.141816  352564 main.go:141] libmachine: (multinode-224116) Calling .GetIP
	I1002 11:00:21.144458  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:21.144807  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:21.144842  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:21.144976  352564 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 11:00:21.148952  352564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
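The `/etc/hosts` command above uses a filter-then-append idiom: strip any stale `host.minikube.internal` line, append the current mapping, write to a temp file, then `cp` it into place so the file is replaced atomically from the reader's point of view. A standalone sketch against a temp file (no sudo, hypothetical sample contents):

```shell
# Reproduce the hosts-file rewrite from the log against a scratch file.
hosts=$(mktemp); tmp=$(mktemp)
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n' > "$hosts"
# Drop any old entry, then append the fresh one; never edit in place.
{ grep -v "${tab}host.minikube.internal\$" "$hosts"
  printf '192.168.39.1\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # exactly one entry remains
rm -f "$hosts" "$tmp"
```

Running it again yields the same single entry, which is why the step is safe on every `minikube start`.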
	I1002 11:00:21.161397  352564 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:00:21.161460  352564 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:00:21.192456  352564 command_runner.go:130] > {
	I1002 11:00:21.192482  352564 command_runner.go:130] >   "images": [
	I1002 11:00:21.192486  352564 command_runner.go:130] >   ]
	I1002 11:00:21.192495  352564 command_runner.go:130] > }
	I1002 11:00:21.192637  352564 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 11:00:21.192731  352564 ssh_runner.go:195] Run: which lz4
	I1002 11:00:21.196530  352564 command_runner.go:130] > /usr/bin/lz4
	I1002 11:00:21.196551  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1002 11:00:21.196620  352564 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1002 11:00:21.200566  352564 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:00:21.200811  352564 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:00:21.200829  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1002 11:00:22.963996  352564 crio.go:444] Took 1.767392 seconds to copy over tarball
	I1002 11:00:22.964077  352564 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 11:00:25.718412  352564 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.75430027s)
	I1002 11:00:25.718447  352564 crio.go:451] Took 2.754421 seconds to extract the tarball
	I1002 11:00:25.718460  352564 ssh_runner.go:146] rm: /preloaded.tar.lz4
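The preload path above is scp the `.tar.lz4` to the guest, unpack it with `tar -I lz4 -C /var -xf`, then delete the tarball. GNU tar's `-I` flag pipes the archive through an external (de)compressor. A sketch of the same round trip using gzip as the compressor, since `lz4` may not be installed where this runs (the file names are made up for the demo):

```shell
# Pack and unpack via an external compressor, mirroring `tar -I lz4 -C /var -xf`.
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/file.txt"
tar -I gzip -C "$src" -cf "$src/preload.tar.gz" file.txt
tar -I gzip -C "$dst" -xf "$src/preload.tar.gz"
cat "$dst/file.txt"
rm -rf "$src" "$dst"
```

With lz4 present the only change is `-I lz4` and the `.tar.lz4` suffix, matching the ~457 MB preload transfer in the log.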
	I1002 11:00:25.758583  352564 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:00:25.819745  352564 command_runner.go:130] > {
	I1002 11:00:25.819775  352564 command_runner.go:130] >   "images": [
	I1002 11:00:25.819786  352564 command_runner.go:130] >     {
	I1002 11:00:25.819794  352564 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1002 11:00:25.819799  352564 command_runner.go:130] >       "repoTags": [
	I1002 11:00:25.819818  352564 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1002 11:00:25.819825  352564 command_runner.go:130] >       ],
	I1002 11:00:25.819832  352564 command_runner.go:130] >       "repoDigests": [
	I1002 11:00:25.819849  352564 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1002 11:00:25.819863  352564 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1002 11:00:25.819870  352564 command_runner.go:130] >       ],
	I1002 11:00:25.819875  352564 command_runner.go:130] >       "size": "65258016",
	I1002 11:00:25.819879  352564 command_runner.go:130] >       "uid": null,
	I1002 11:00:25.819884  352564 command_runner.go:130] >       "username": "",
	I1002 11:00:25.819892  352564 command_runner.go:130] >       "spec": null,
	I1002 11:00:25.819899  352564 command_runner.go:130] >       "pinned": false
	I1002 11:00:25.819905  352564 command_runner.go:130] >     },
	I1002 11:00:25.819914  352564 command_runner.go:130] >     {
	I1002 11:00:25.819925  352564 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 11:00:25.819940  352564 command_runner.go:130] >       "repoTags": [
	I1002 11:00:25.819952  352564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 11:00:25.819961  352564 command_runner.go:130] >       ],
	I1002 11:00:25.819968  352564 command_runner.go:130] >       "repoDigests": [
	I1002 11:00:25.819976  352564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 11:00:25.819987  352564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 11:00:25.819993  352564 command_runner.go:130] >       ],
	I1002 11:00:25.820009  352564 command_runner.go:130] >       "size": "31470524",
	I1002 11:00:25.820017  352564 command_runner.go:130] >       "uid": null,
	I1002 11:00:25.820025  352564 command_runner.go:130] >       "username": "",
	I1002 11:00:25.820033  352564 command_runner.go:130] >       "spec": null,
	I1002 11:00:25.820043  352564 command_runner.go:130] >       "pinned": false
	I1002 11:00:25.820052  352564 command_runner.go:130] >     },
	I1002 11:00:25.820058  352564 command_runner.go:130] >     {
	I1002 11:00:25.820072  352564 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1002 11:00:25.820081  352564 command_runner.go:130] >       "repoTags": [
	I1002 11:00:25.820095  352564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1002 11:00:25.820104  352564 command_runner.go:130] >       ],
	I1002 11:00:25.820120  352564 command_runner.go:130] >       "repoDigests": [
	I1002 11:00:25.820143  352564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1002 11:00:25.820155  352564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1002 11:00:25.820164  352564 command_runner.go:130] >       ],
	I1002 11:00:25.820174  352564 command_runner.go:130] >       "size": "53621675",
	I1002 11:00:25.820185  352564 command_runner.go:130] >       "uid": null,
	I1002 11:00:25.820195  352564 command_runner.go:130] >       "username": "",
	I1002 11:00:25.820202  352564 command_runner.go:130] >       "spec": null,
	I1002 11:00:25.820212  352564 command_runner.go:130] >       "pinned": false
	I1002 11:00:25.820221  352564 command_runner.go:130] >     },
	I1002 11:00:25.820230  352564 command_runner.go:130] >     {
	I1002 11:00:25.820241  352564 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1002 11:00:25.820250  352564 command_runner.go:130] >       "repoTags": [
	I1002 11:00:25.820261  352564 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1002 11:00:25.820272  352564 command_runner.go:130] >       ],
	I1002 11:00:25.820282  352564 command_runner.go:130] >       "repoDigests": [
	I1002 11:00:25.820294  352564 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1002 11:00:25.820308  352564 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1002 11:00:25.820326  352564 command_runner.go:130] >       ],
	I1002 11:00:25.820334  352564 command_runner.go:130] >       "size": "295456551",
	I1002 11:00:25.820343  352564 command_runner.go:130] >       "uid": {
	I1002 11:00:25.820354  352564 command_runner.go:130] >         "value": "0"
	I1002 11:00:25.820364  352564 command_runner.go:130] >       },
	I1002 11:00:25.820371  352564 command_runner.go:130] >       "username": "",
	I1002 11:00:25.820381  352564 command_runner.go:130] >       "spec": null,
	I1002 11:00:25.820391  352564 command_runner.go:130] >       "pinned": false
	I1002 11:00:25.820400  352564 command_runner.go:130] >     },
	I1002 11:00:25.820408  352564 command_runner.go:130] >     {
	I1002 11:00:25.820417  352564 command_runner.go:130] >       "id": "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce",
	I1002 11:00:25.820427  352564 command_runner.go:130] >       "repoTags": [
	I1002 11:00:25.820439  352564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I1002 11:00:25.820449  352564 command_runner.go:130] >       ],
	I1002 11:00:25.820457  352564 command_runner.go:130] >       "repoDigests": [
	I1002 11:00:25.820472  352564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631",
	I1002 11:00:25.820492  352564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I1002 11:00:25.820499  352564 command_runner.go:130] >       ],
	I1002 11:00:25.820507  352564 command_runner.go:130] >       "size": "127149008",
	I1002 11:00:25.820516  352564 command_runner.go:130] >       "uid": {
	I1002 11:00:25.820527  352564 command_runner.go:130] >         "value": "0"
	I1002 11:00:25.820536  352564 command_runner.go:130] >       },
	I1002 11:00:25.820544  352564 command_runner.go:130] >       "username": "",
	I1002 11:00:25.820559  352564 command_runner.go:130] >       "spec": null,
	I1002 11:00:25.820570  352564 command_runner.go:130] >       "pinned": false
	I1002 11:00:25.820578  352564 command_runner.go:130] >     },
	I1002 11:00:25.820586  352564 command_runner.go:130] >     {
	I1002 11:00:25.820592  352564 command_runner.go:130] >       "id": "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57",
	I1002 11:00:25.820602  352564 command_runner.go:130] >       "repoTags": [
	I1002 11:00:25.820615  352564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I1002 11:00:25.820625  352564 command_runner.go:130] >       ],
	I1002 11:00:25.820633  352564 command_runner.go:130] >       "repoDigests": [
	I1002 11:00:25.820648  352564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4",
	I1002 11:00:25.820664  352564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e"
	I1002 11:00:25.820671  352564 command_runner.go:130] >       ],
	I1002 11:00:25.820676  352564 command_runner.go:130] >       "size": "123171638",
	I1002 11:00:25.820691  352564 command_runner.go:130] >       "uid": {
	I1002 11:00:25.820702  352564 command_runner.go:130] >         "value": "0"
	I1002 11:00:25.820711  352564 command_runner.go:130] >       },
	I1002 11:00:25.820718  352564 command_runner.go:130] >       "username": "",
	I1002 11:00:25.820728  352564 command_runner.go:130] >       "spec": null,
	I1002 11:00:25.820738  352564 command_runner.go:130] >       "pinned": false
	I1002 11:00:25.820747  352564 command_runner.go:130] >     },
	I1002 11:00:25.820756  352564 command_runner.go:130] >     {
	I1002 11:00:25.820765  352564 command_runner.go:130] >       "id": "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0",
	I1002 11:00:25.820775  352564 command_runner.go:130] >       "repoTags": [
	I1002 11:00:25.820787  352564 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I1002 11:00:25.820796  352564 command_runner.go:130] >       ],
	I1002 11:00:25.820804  352564 command_runner.go:130] >       "repoDigests": [
	I1002 11:00:25.820818  352564 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded",
	I1002 11:00:25.820834  352564 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"
	I1002 11:00:25.820842  352564 command_runner.go:130] >       ],
	I1002 11:00:25.820850  352564 command_runner.go:130] >       "size": "74687895",
	I1002 11:00:25.820856  352564 command_runner.go:130] >       "uid": null,
	I1002 11:00:25.820869  352564 command_runner.go:130] >       "username": "",
	I1002 11:00:25.820879  352564 command_runner.go:130] >       "spec": null,
	I1002 11:00:25.820887  352564 command_runner.go:130] >       "pinned": false
	I1002 11:00:25.820896  352564 command_runner.go:130] >     },
	I1002 11:00:25.820905  352564 command_runner.go:130] >     {
	I1002 11:00:25.820918  352564 command_runner.go:130] >       "id": "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8",
	I1002 11:00:25.820927  352564 command_runner.go:130] >       "repoTags": [
	I1002 11:00:25.820935  352564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I1002 11:00:25.820944  352564 command_runner.go:130] >       ],
	I1002 11:00:25.820954  352564 command_runner.go:130] >       "repoDigests": [
	I1002 11:00:25.820987  352564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I1002 11:00:25.821004  352564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543"
	I1002 11:00:25.821010  352564 command_runner.go:130] >       ],
	I1002 11:00:25.821017  352564 command_runner.go:130] >       "size": "61485878",
	I1002 11:00:25.821021  352564 command_runner.go:130] >       "uid": {
	I1002 11:00:25.821025  352564 command_runner.go:130] >         "value": "0"
	I1002 11:00:25.821032  352564 command_runner.go:130] >       },
	I1002 11:00:25.821042  352564 command_runner.go:130] >       "username": "",
	I1002 11:00:25.821056  352564 command_runner.go:130] >       "spec": null,
	I1002 11:00:25.821066  352564 command_runner.go:130] >       "pinned": false
	I1002 11:00:25.821075  352564 command_runner.go:130] >     },
	I1002 11:00:25.821081  352564 command_runner.go:130] >     {
	I1002 11:00:25.821095  352564 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1002 11:00:25.821103  352564 command_runner.go:130] >       "repoTags": [
	I1002 11:00:25.821111  352564 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1002 11:00:25.821121  352564 command_runner.go:130] >       ],
	I1002 11:00:25.821132  352564 command_runner.go:130] >       "repoDigests": [
	I1002 11:00:25.821147  352564 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1002 11:00:25.821161  352564 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1002 11:00:25.821171  352564 command_runner.go:130] >       ],
	I1002 11:00:25.821181  352564 command_runner.go:130] >       "size": "750414",
	I1002 11:00:25.821189  352564 command_runner.go:130] >       "uid": {
	I1002 11:00:25.821195  352564 command_runner.go:130] >         "value": "65535"
	I1002 11:00:25.821204  352564 command_runner.go:130] >       },
	I1002 11:00:25.821214  352564 command_runner.go:130] >       "username": "",
	I1002 11:00:25.821224  352564 command_runner.go:130] >       "spec": null,
	I1002 11:00:25.821238  352564 command_runner.go:130] >       "pinned": false
	I1002 11:00:25.821246  352564 command_runner.go:130] >     }
	I1002 11:00:25.821255  352564 command_runner.go:130] >   ]
	I1002 11:00:25.821264  352564 command_runner.go:130] > }
	I1002 11:00:25.821417  352564 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 11:00:25.821433  352564 cache_images.go:84] Images are preloaded, skipping loading
	I1002 11:00:25.821524  352564 ssh_runner.go:195] Run: crio config
	I1002 11:00:25.872043  352564 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 11:00:25.872082  352564 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 11:00:25.872089  352564 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 11:00:25.872093  352564 command_runner.go:130] > #
	I1002 11:00:25.872100  352564 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 11:00:25.872107  352564 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 11:00:25.872114  352564 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 11:00:25.872120  352564 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 11:00:25.872124  352564 command_runner.go:130] > # reload'.
	I1002 11:00:25.872131  352564 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 11:00:25.872144  352564 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 11:00:25.872151  352564 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 11:00:25.872206  352564 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 11:00:25.872234  352564 command_runner.go:130] > [crio]
	I1002 11:00:25.872245  352564 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 11:00:25.872254  352564 command_runner.go:130] > # containers images, in this directory.
	I1002 11:00:25.872261  352564 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1002 11:00:25.872280  352564 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 11:00:25.872290  352564 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1002 11:00:25.872300  352564 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 11:00:25.872311  352564 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 11:00:25.872320  352564 command_runner.go:130] > storage_driver = "overlay"
	I1002 11:00:25.872335  352564 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 11:00:25.872347  352564 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 11:00:25.872354  352564 command_runner.go:130] > storage_option = [
	I1002 11:00:25.872405  352564 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1002 11:00:25.872417  352564 command_runner.go:130] > ]
	I1002 11:00:25.872442  352564 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 11:00:25.872491  352564 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 11:00:25.872510  352564 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 11:00:25.872528  352564 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 11:00:25.872541  352564 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 11:00:25.872552  352564 command_runner.go:130] > # always happen on a node reboot
	I1002 11:00:25.872561  352564 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 11:00:25.872573  352564 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 11:00:25.872587  352564 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 11:00:25.872649  352564 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 11:00:25.872681  352564 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1002 11:00:25.872696  352564 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 11:00:25.872709  352564 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 11:00:25.872723  352564 command_runner.go:130] > # internal_wipe = true
	I1002 11:00:25.872734  352564 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 11:00:25.872746  352564 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 11:00:25.872755  352564 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 11:00:25.872801  352564 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 11:00:25.872815  352564 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 11:00:25.872828  352564 command_runner.go:130] > [crio.api]
	I1002 11:00:25.872837  352564 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 11:00:25.872845  352564 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 11:00:25.872858  352564 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 11:00:25.872868  352564 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 11:00:25.872885  352564 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 11:00:25.872896  352564 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 11:00:25.872905  352564 command_runner.go:130] > # stream_port = "0"
	I1002 11:00:25.872915  352564 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 11:00:25.872925  352564 command_runner.go:130] > # stream_enable_tls = false
	I1002 11:00:25.872934  352564 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 11:00:25.872945  352564 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 11:00:25.872959  352564 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 11:00:25.872972  352564 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1002 11:00:25.872981  352564 command_runner.go:130] > # minutes.
	I1002 11:00:25.872989  352564 command_runner.go:130] > # stream_tls_cert = ""
	I1002 11:00:25.873000  352564 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 11:00:25.873009  352564 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1002 11:00:25.873027  352564 command_runner.go:130] > # stream_tls_key = ""
	I1002 11:00:25.873041  352564 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 11:00:25.873055  352564 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 11:00:25.873067  352564 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1002 11:00:25.873077  352564 command_runner.go:130] > # stream_tls_ca = ""
	I1002 11:00:25.873093  352564 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 11:00:25.873104  352564 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1002 11:00:25.873118  352564 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 11:00:25.873130  352564 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1002 11:00:25.873160  352564 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 11:00:25.873173  352564 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 11:00:25.873183  352564 command_runner.go:130] > [crio.runtime]
	I1002 11:00:25.873193  352564 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 11:00:25.873203  352564 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 11:00:25.873210  352564 command_runner.go:130] > # "nofile=1024:2048"
	I1002 11:00:25.873223  352564 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 11:00:25.873234  352564 command_runner.go:130] > # default_ulimits = [
	I1002 11:00:25.873240  352564 command_runner.go:130] > # ]
	I1002 11:00:25.873277  352564 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 11:00:25.873292  352564 command_runner.go:130] > # no_pivot = false
	I1002 11:00:25.873303  352564 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 11:00:25.873317  352564 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 11:00:25.873329  352564 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 11:00:25.873342  352564 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 11:00:25.873353  352564 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 11:00:25.873367  352564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 11:00:25.873376  352564 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1002 11:00:25.873381  352564 command_runner.go:130] > # Cgroup setting for conmon
	I1002 11:00:25.873399  352564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 11:00:25.873411  352564 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 11:00:25.873421  352564 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 11:00:25.873433  352564 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 11:00:25.873447  352564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 11:00:25.873457  352564 command_runner.go:130] > conmon_env = [
	I1002 11:00:25.873467  352564 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1002 11:00:25.873476  352564 command_runner.go:130] > ]
	I1002 11:00:25.873489  352564 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 11:00:25.873501  352564 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 11:00:25.873510  352564 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 11:00:25.873535  352564 command_runner.go:130] > # default_env = [
	I1002 11:00:25.873544  352564 command_runner.go:130] > # ]
	I1002 11:00:25.873553  352564 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 11:00:25.873564  352564 command_runner.go:130] > # selinux = false
	I1002 11:00:25.873580  352564 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 11:00:25.873593  352564 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1002 11:00:25.873605  352564 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1002 11:00:25.873615  352564 command_runner.go:130] > # seccomp_profile = ""
	I1002 11:00:25.873624  352564 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1002 11:00:25.873633  352564 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1002 11:00:25.873644  352564 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1002 11:00:25.873655  352564 command_runner.go:130] > # which might increase security.
	I1002 11:00:25.873667  352564 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1002 11:00:25.873681  352564 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 11:00:25.873697  352564 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 11:00:25.873714  352564 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 11:00:25.873724  352564 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 11:00:25.873743  352564 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:00:25.873785  352564 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 11:00:25.873797  352564 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 11:00:25.873804  352564 command_runner.go:130] > # the cgroup blockio controller.
	I1002 11:00:25.873814  352564 command_runner.go:130] > # blockio_config_file = ""
	I1002 11:00:25.873825  352564 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 11:00:25.873836  352564 command_runner.go:130] > # irqbalance daemon.
	I1002 11:00:25.873844  352564 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 11:00:25.873857  352564 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 11:00:25.873869  352564 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:00:25.873879  352564 command_runner.go:130] > # rdt_config_file = ""
	I1002 11:00:25.873889  352564 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 11:00:25.873901  352564 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1002 11:00:25.873914  352564 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 11:00:25.873922  352564 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 11:00:25.873936  352564 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 11:00:25.873954  352564 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 11:00:25.873967  352564 command_runner.go:130] > # will be added.
	I1002 11:00:25.873977  352564 command_runner.go:130] > # default_capabilities = [
	I1002 11:00:25.873986  352564 command_runner.go:130] > # 	"CHOWN",
	I1002 11:00:25.873996  352564 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 11:00:25.874003  352564 command_runner.go:130] > # 	"FSETID",
	I1002 11:00:25.874010  352564 command_runner.go:130] > # 	"FOWNER",
	I1002 11:00:25.874019  352564 command_runner.go:130] > # 	"SETGID",
	I1002 11:00:25.874026  352564 command_runner.go:130] > # 	"SETUID",
	I1002 11:00:25.874035  352564 command_runner.go:130] > # 	"SETPCAP",
	I1002 11:00:25.874042  352564 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 11:00:25.874052  352564 command_runner.go:130] > # 	"KILL",
	I1002 11:00:25.874058  352564 command_runner.go:130] > # ]
	I1002 11:00:25.874071  352564 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 11:00:25.874084  352564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 11:00:25.874092  352564 command_runner.go:130] > # default_sysctls = [
	I1002 11:00:25.874101  352564 command_runner.go:130] > # ]
	I1002 11:00:25.874112  352564 command_runner.go:130] > # List of devices on the host that a
	I1002 11:00:25.874130  352564 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 11:00:25.874141  352564 command_runner.go:130] > # allowed_devices = [
	I1002 11:00:25.874152  352564 command_runner.go:130] > # 	"/dev/fuse",
	I1002 11:00:25.874158  352564 command_runner.go:130] > # ]
	I1002 11:00:25.874169  352564 command_runner.go:130] > # List of additional devices. specified as
	I1002 11:00:25.874186  352564 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 11:00:25.874197  352564 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 11:00:25.874245  352564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 11:00:25.874256  352564 command_runner.go:130] > # additional_devices = [
	I1002 11:00:25.874263  352564 command_runner.go:130] > # ]
	I1002 11:00:25.874277  352564 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 11:00:25.874287  352564 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 11:00:25.874294  352564 command_runner.go:130] > # 	"/etc/cdi",
	I1002 11:00:25.874303  352564 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 11:00:25.874309  352564 command_runner.go:130] > # ]
	I1002 11:00:25.874322  352564 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 11:00:25.874334  352564 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 11:00:25.874345  352564 command_runner.go:130] > # Defaults to false.
	I1002 11:00:25.874374  352564 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 11:00:25.874386  352564 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 11:00:25.874399  352564 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 11:00:25.874409  352564 command_runner.go:130] > # hooks_dir = [
	I1002 11:00:25.874417  352564 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 11:00:25.874426  352564 command_runner.go:130] > # ]
	I1002 11:00:25.874436  352564 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 11:00:25.874450  352564 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 11:00:25.874462  352564 command_runner.go:130] > # its default mounts from the following two files:
	I1002 11:00:25.874469  352564 command_runner.go:130] > #
	I1002 11:00:25.874479  352564 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 11:00:25.874495  352564 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 11:00:25.874506  352564 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 11:00:25.874510  352564 command_runner.go:130] > #
	I1002 11:00:25.874529  352564 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 11:00:25.874542  352564 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 11:00:25.874555  352564 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 11:00:25.874566  352564 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 11:00:25.874579  352564 command_runner.go:130] > #
	I1002 11:00:25.874589  352564 command_runner.go:130] > # default_mounts_file = ""
	I1002 11:00:25.874601  352564 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 11:00:25.874612  352564 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 11:00:25.874621  352564 command_runner.go:130] > pids_limit = 1024
	I1002 11:00:25.874631  352564 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1002 11:00:25.874650  352564 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 11:00:25.874663  352564 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 11:00:25.874678  352564 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 11:00:25.874688  352564 command_runner.go:130] > # log_size_max = -1
	I1002 11:00:25.874699  352564 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1002 11:00:25.874712  352564 command_runner.go:130] > # log_to_journald = false
	I1002 11:00:25.874724  352564 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 11:00:25.874735  352564 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 11:00:25.874746  352564 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 11:00:25.874757  352564 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 11:00:25.874768  352564 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 11:00:25.874778  352564 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 11:00:25.874789  352564 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 11:00:25.874799  352564 command_runner.go:130] > # read_only = false
	I1002 11:00:25.874810  352564 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 11:00:25.874855  352564 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 11:00:25.874865  352564 command_runner.go:130] > # live configuration reload.
	I1002 11:00:25.874871  352564 command_runner.go:130] > # log_level = "info"
	I1002 11:00:25.874881  352564 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 11:00:25.874891  352564 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:00:25.874897  352564 command_runner.go:130] > # log_filter = ""
	I1002 11:00:25.874907  352564 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 11:00:25.874920  352564 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 11:00:25.874930  352564 command_runner.go:130] > # separated by comma.
	I1002 11:00:25.874946  352564 command_runner.go:130] > # uid_mappings = ""
	I1002 11:00:25.874959  352564 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 11:00:25.874972  352564 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 11:00:25.874982  352564 command_runner.go:130] > # separated by comma.
	I1002 11:00:25.874988  352564 command_runner.go:130] > # gid_mappings = ""
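The uid_mappings/gid_mappings format described above — comma-separated "containerID:HostID:Size" triples — can be parsed with a few lines of Python. This is a hedged illustration of the documented format, not CRI-O's parser.

```python
# Illustrative sketch, not CRI-O source: parse a uid_mappings /
# gid_mappings string of the form "containerID:hostID:size[,...]"
# as described in the config comments above.
from typing import List, Tuple

def parse_id_mappings(spec: str) -> List[Tuple[int, int, int]]:
    """Parse "containerID:hostID:size" triples separated by commas."""
    mappings = []
    for part in filter(None, (p.strip() for p in spec.split(","))):
        container_id, host_id, size = (int(x) for x in part.split(":"))
        mappings.append((container_id, host_id, size))
    return mappings

# e.g. map container IDs 0..9999 to host IDs 100000..109999
print(parse_id_mappings("0:100000:10000"))
```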
	I1002 11:00:25.875000  352564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 11:00:25.875014  352564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 11:00:25.875027  352564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 11:00:25.875037  352564 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 11:00:25.875049  352564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 11:00:25.875062  352564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 11:00:25.875074  352564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 11:00:25.875082  352564 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 11:00:25.875094  352564 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 11:00:25.875104  352564 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 11:00:25.875117  352564 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 11:00:25.875128  352564 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 11:00:25.875140  352564 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 11:00:25.875153  352564 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 11:00:25.875164  352564 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 11:00:25.875173  352564 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 11:00:25.875180  352564 command_runner.go:130] > drop_infra_ctr = false
	I1002 11:00:25.875186  352564 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 11:00:25.875194  352564 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 11:00:25.875205  352564 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 11:00:25.875211  352564 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 11:00:25.875217  352564 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 11:00:25.875225  352564 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 11:00:25.875233  352564 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 11:00:25.875242  352564 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 11:00:25.875249  352564 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1002 11:00:25.875255  352564 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 11:00:25.875263  352564 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1002 11:00:25.875272  352564 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1002 11:00:25.875276  352564 command_runner.go:130] > # default_runtime = "runc"
	I1002 11:00:25.875284  352564 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 11:00:25.875291  352564 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1002 11:00:25.875306  352564 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 11:00:25.875313  352564 command_runner.go:130] > # creation as a file is not desired either.
	I1002 11:00:25.875321  352564 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 11:00:25.875329  352564 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 11:00:25.875336  352564 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 11:00:25.875342  352564 command_runner.go:130] > # ]
	I1002 11:00:25.875351  352564 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 11:00:25.875359  352564 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 11:00:25.875367  352564 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1002 11:00:25.875374  352564 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1002 11:00:25.875379  352564 command_runner.go:130] > #
	I1002 11:00:25.875384  352564 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1002 11:00:25.875391  352564 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1002 11:00:25.875396  352564 command_runner.go:130] > #  runtime_type = "oci"
	I1002 11:00:25.875403  352564 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1002 11:00:25.875407  352564 command_runner.go:130] > #  privileged_without_host_devices = false
	I1002 11:00:25.875414  352564 command_runner.go:130] > #  allowed_annotations = []
	I1002 11:00:25.875418  352564 command_runner.go:130] > # Where:
	I1002 11:00:25.875425  352564 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1002 11:00:25.875434  352564 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1002 11:00:25.875442  352564 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 11:00:25.875449  352564 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 11:00:25.875455  352564 command_runner.go:130] > #   in $PATH.
	I1002 11:00:25.875463  352564 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1002 11:00:25.875470  352564 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 11:00:25.875476  352564 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1002 11:00:25.875482  352564 command_runner.go:130] > #   state.
	I1002 11:00:25.875488  352564 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 11:00:25.875525  352564 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1002 11:00:25.875536  352564 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 11:00:25.875548  352564 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 11:00:25.875558  352564 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 11:00:25.875571  352564 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 11:00:25.875581  352564 command_runner.go:130] > #   The currently recognized values are:
	I1002 11:00:25.875598  352564 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 11:00:25.875614  352564 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 11:00:25.875628  352564 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 11:00:25.875640  352564 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 11:00:25.875651  352564 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 11:00:25.875664  352564 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 11:00:25.875676  352564 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 11:00:25.875693  352564 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1002 11:00:25.875704  352564 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 11:00:25.875715  352564 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 11:00:25.875726  352564 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1002 11:00:25.875737  352564 command_runner.go:130] > runtime_type = "oci"
	I1002 11:00:25.875747  352564 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 11:00:25.875754  352564 command_runner.go:130] > runtime_config_path = ""
	I1002 11:00:25.875760  352564 command_runner.go:130] > monitor_path = ""
	I1002 11:00:25.875769  352564 command_runner.go:130] > monitor_cgroup = ""
	I1002 11:00:25.875776  352564 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 11:00:25.875790  352564 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1002 11:00:25.875801  352564 command_runner.go:130] > # running containers
	I1002 11:00:25.875807  352564 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1002 11:00:25.875820  352564 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1002 11:00:25.875892  352564 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1002 11:00:25.875905  352564 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1002 11:00:25.875914  352564 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1002 11:00:25.875921  352564 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1002 11:00:25.875936  352564 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1002 11:00:25.875946  352564 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1002 11:00:25.875956  352564 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1002 11:00:25.875966  352564 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1002 11:00:25.875976  352564 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 11:00:25.875987  352564 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 11:00:25.876004  352564 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 11:00:25.876015  352564 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1002 11:00:25.876031  352564 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1002 11:00:25.876040  352564 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 11:00:25.876061  352564 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 11:00:25.876077  352564 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 11:00:25.876094  352564 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 11:00:25.876109  352564 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 11:00:25.876118  352564 command_runner.go:130] > # Example:
	I1002 11:00:25.876126  352564 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 11:00:25.876136  352564 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 11:00:25.876145  352564 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 11:00:25.876159  352564 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 11:00:25.876166  352564 command_runner.go:130] > # cpuset = 0
	I1002 11:00:25.876170  352564 command_runner.go:130] > # cpushares = "0-1"
	I1002 11:00:25.876175  352564 command_runner.go:130] > # Where:
	I1002 11:00:25.876179  352564 command_runner.go:130] > # The workload name is workload-type.
	I1002 11:00:25.876189  352564 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 11:00:25.876194  352564 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 11:00:25.876202  352564 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 11:00:25.876210  352564 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 11:00:25.876218  352564 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1002 11:00:25.876222  352564 command_runner.go:130] > # 
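The workload-annotation resolution described above — a pod opts in via the key-only activation annotation, with optional per-container JSON overrides — can be sketched as follows. This is illustrative only, not CRI-O source; the default resource values are assumptions.

```python
# Illustrative sketch of the workload annotation lookup described in the
# config comments above: the pod must carry the activation annotation
# (key-only match), and "<annotation_prefix>/<container_name>" keys hold
# JSON objects that override the workload's default resources.
import json
from typing import Optional

ACTIVATION = "io.crio/workload"          # from the example above
PREFIX = "io.crio.workload-type"         # from the example above
DEFAULTS = {"cpushares": "0-1", "cpuset": "0"}  # assumed defaults

def resolve_resources(pod_annotations: dict, ctr_name: str) -> Optional[dict]:
    if ACTIVATION not in pod_annotations:    # pod did not opt in
        return None
    resources = dict(DEFAULTS)
    override = pod_annotations.get(f"{PREFIX}/{ctr_name}")
    if override:                             # per-container JSON override
        resources.update(json.loads(override))
    return resources

anns = {ACTIVATION: "", f"{PREFIX}/app": '{"cpushares": "512"}'}
print(resolve_resources(anns, "app"))
```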
	I1002 11:00:25.876228  352564 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 11:00:25.876234  352564 command_runner.go:130] > #
	I1002 11:00:25.876239  352564 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 11:00:25.876249  352564 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1002 11:00:25.876257  352564 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1002 11:00:25.876264  352564 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1002 11:00:25.876272  352564 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1002 11:00:25.876278  352564 command_runner.go:130] > [crio.image]
	I1002 11:00:25.876293  352564 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 11:00:25.876302  352564 command_runner.go:130] > # default_transport = "docker://"
	I1002 11:00:25.876308  352564 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 11:00:25.876317  352564 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 11:00:25.876321  352564 command_runner.go:130] > # global_auth_file = ""
	I1002 11:00:25.876329  352564 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 11:00:25.876334  352564 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:00:25.876339  352564 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1002 11:00:25.876345  352564 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 11:00:25.876351  352564 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 11:00:25.876356  352564 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:00:25.876362  352564 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 11:00:25.876368  352564 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 11:00:25.876374  352564 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1002 11:00:25.876381  352564 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1002 11:00:25.876386  352564 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 11:00:25.876390  352564 command_runner.go:130] > # pause_command = "/pause"
	I1002 11:00:25.876400  352564 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 11:00:25.876406  352564 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 11:00:25.876411  352564 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 11:00:25.876417  352564 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 11:00:25.876422  352564 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 11:00:25.876426  352564 command_runner.go:130] > # signature_policy = ""
	I1002 11:00:25.876431  352564 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 11:00:25.876437  352564 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 11:00:25.876441  352564 command_runner.go:130] > # changing them here.
	I1002 11:00:25.876444  352564 command_runner.go:130] > # insecure_registries = [
	I1002 11:00:25.876448  352564 command_runner.go:130] > # ]
	I1002 11:00:25.876454  352564 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 11:00:25.876458  352564 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 11:00:25.876462  352564 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 11:00:25.876467  352564 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 11:00:25.876471  352564 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 11:00:25.876477  352564 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 11:00:25.876480  352564 command_runner.go:130] > # CNI plugins.
	I1002 11:00:25.876486  352564 command_runner.go:130] > [crio.network]
	I1002 11:00:25.876491  352564 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 11:00:25.876496  352564 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1002 11:00:25.876500  352564 command_runner.go:130] > # cni_default_network = ""
	I1002 11:00:25.876505  352564 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 11:00:25.876510  352564 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 11:00:25.876516  352564 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 11:00:25.876526  352564 command_runner.go:130] > # plugin_dirs = [
	I1002 11:00:25.876530  352564 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 11:00:25.876533  352564 command_runner.go:130] > # ]
	I1002 11:00:25.876540  352564 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1002 11:00:25.876544  352564 command_runner.go:130] > [crio.metrics]
	I1002 11:00:25.876549  352564 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 11:00:25.876556  352564 command_runner.go:130] > enable_metrics = true
	I1002 11:00:25.876561  352564 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 11:00:25.876565  352564 command_runner.go:130] > # Per default all metrics are enabled.
	I1002 11:00:25.876574  352564 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 11:00:25.876582  352564 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 11:00:25.876593  352564 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 11:00:25.876597  352564 command_runner.go:130] > # metrics_collectors = [
	I1002 11:00:25.876601  352564 command_runner.go:130] > # 	"operations",
	I1002 11:00:25.876606  352564 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1002 11:00:25.876610  352564 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1002 11:00:25.876614  352564 command_runner.go:130] > # 	"operations_errors",
	I1002 11:00:25.876618  352564 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1002 11:00:25.876622  352564 command_runner.go:130] > # 	"image_pulls_by_name",
	I1002 11:00:25.876626  352564 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1002 11:00:25.876630  352564 command_runner.go:130] > # 	"image_pulls_failures",
	I1002 11:00:25.876634  352564 command_runner.go:130] > # 	"image_pulls_successes",
	I1002 11:00:25.876638  352564 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 11:00:25.876642  352564 command_runner.go:130] > # 	"image_layer_reuse",
	I1002 11:00:25.876650  352564 command_runner.go:130] > # 	"containers_oom_total",
	I1002 11:00:25.876654  352564 command_runner.go:130] > # 	"containers_oom",
	I1002 11:00:25.876658  352564 command_runner.go:130] > # 	"processes_defunct",
	I1002 11:00:25.876662  352564 command_runner.go:130] > # 	"operations_total",
	I1002 11:00:25.876666  352564 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 11:00:25.876673  352564 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 11:00:25.876678  352564 command_runner.go:130] > # 	"operations_errors_total",
	I1002 11:00:25.876682  352564 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 11:00:25.876689  352564 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 11:00:25.876694  352564 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 11:00:25.876698  352564 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 11:00:25.876702  352564 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 11:00:25.876709  352564 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 11:00:25.876712  352564 command_runner.go:130] > # ]
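The prefix equivalence described above ("operations", "crio_operations", and "container_runtime_crio_operations" are treated the same) can be sketched as a normalizer. Illustrative only, not CRI-O's implementation.

```python
# Illustrative sketch: strip the optional "container_runtime_" and
# "crio_" prefixes so the collector names the comment above calls
# equivalent all normalize to the same string.
def canonical_collector(name: str) -> str:
    for prefix in ("container_runtime_", "crio_"):
        if name.startswith(prefix):
            name = name[len(prefix):]
    return name

print(canonical_collector("container_runtime_crio_operations"))  # operations
```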
	I1002 11:00:25.876717  352564 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 11:00:25.876724  352564 command_runner.go:130] > # metrics_port = 9090
	I1002 11:00:25.876729  352564 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 11:00:25.876736  352564 command_runner.go:130] > # metrics_socket = ""
	I1002 11:00:25.876741  352564 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 11:00:25.876748  352564 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 11:00:25.876754  352564 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 11:00:25.876762  352564 command_runner.go:130] > # certificate on any modification event.
	I1002 11:00:25.876766  352564 command_runner.go:130] > # metrics_cert = ""
	I1002 11:00:25.876774  352564 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 11:00:25.876781  352564 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 11:00:25.876785  352564 command_runner.go:130] > # metrics_key = ""
	I1002 11:00:25.876793  352564 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 11:00:25.876797  352564 command_runner.go:130] > [crio.tracing]
	I1002 11:00:25.876806  352564 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 11:00:25.876810  352564 command_runner.go:130] > # enable_tracing = false
	I1002 11:00:25.876820  352564 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1002 11:00:25.876827  352564 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1002 11:00:25.876832  352564 command_runner.go:130] > # Number of samples to collect per million spans.
	I1002 11:00:25.876837  352564 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1002 11:00:25.876842  352564 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 11:00:25.876849  352564 command_runner.go:130] > [crio.stats]
	I1002 11:00:25.876854  352564 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 11:00:25.876862  352564 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 11:00:25.876866  352564 command_runner.go:130] > # stats_collection_period = 0
	I1002 11:00:25.876899  352564 command_runner.go:130] ! time="2023-10-02 11:00:25.861752497Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1002 11:00:25.876923  352564 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 11:00:25.877035  352564 cni.go:84] Creating CNI manager for ""
	I1002 11:00:25.877050  352564 cni.go:136] 1 nodes found, recommending kindnet
	I1002 11:00:25.877072  352564 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:00:25.877096  352564 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-224116 NodeName:multinode-224116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:00:25.877226  352564 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-224116"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
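A quick sanity check relating the CIDR values in the generated config above: the podSubnet and serviceSubnet must not overlap, and per-node pod CIDRs are carved out of podSubnet by the controller-manager (allocate-node-cidrs: "true"). Illustrative Python using the standard ipaddress module; the /24 node mask is the kubeadm default, assumed here.

```python
# Illustrative check of the CIDR values from the kubeadm config above.
import ipaddress

pod_subnet = ipaddress.ip_network("10.244.0.0/16")     # podSubnet / clusterCIDR
service_subnet = ipaddress.ip_network("10.96.0.0/12")  # serviceSubnet

# The two ranges must not overlap, or pod and service routing collide.
print(pod_subnet.overlaps(service_subnet))  # False

# First few per-node pod ranges, assuming the default /24 node CIDR mask:
node_cidrs = list(pod_subnet.subnets(new_prefix=24))[:3]
print([str(n) for n in node_cidrs])
# ['10.244.0.0/24', '10.244.1.0/24', '10.244.2.0/24']
```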
	
	I1002 11:00:25.877311  352564 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-224116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-224116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:00:25.877370  352564 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:00:25.888628  352564 command_runner.go:130] > kubeadm
	I1002 11:00:25.888648  352564 command_runner.go:130] > kubectl
	I1002 11:00:25.888652  352564 command_runner.go:130] > kubelet
	I1002 11:00:25.888673  352564 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:00:25.888734  352564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:00:25.899479  352564 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1002 11:00:25.917316  352564 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:00:25.934441  352564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1002 11:00:25.952272  352564 ssh_runner.go:195] Run: grep 192.168.39.165	control-plane.minikube.internal$ /etc/hosts
	I1002 11:00:25.956237  352564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.165	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:00:25.968325  352564 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116 for IP: 192.168.39.165
	I1002 11:00:25.968358  352564 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:00:25.968541  352564 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:00:25.968595  352564 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:00:25.968667  352564 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.key
	I1002 11:00:25.968687  352564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.crt with IP's: []
	I1002 11:00:26.337413  352564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.crt ...
	I1002 11:00:26.337454  352564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.crt: {Name:mkc2148f239e21d60574642d9928d5b8ba9744b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:00:26.337621  352564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.key ...
	I1002 11:00:26.337631  352564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.key: {Name:mkfa6970ce4a77087064e8ef7df053474f16e8dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:00:26.337699  352564 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.key.6c00c800
	I1002 11:00:26.337714  352564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.crt.6c00c800 with IP's: [192.168.39.165 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 11:00:26.469137  352564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.crt.6c00c800 ...
	I1002 11:00:26.469167  352564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.crt.6c00c800: {Name:mkdda4fba8595436e1a9383779c5948161cbf9de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:00:26.469318  352564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.key.6c00c800 ...
	I1002 11:00:26.469328  352564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.key.6c00c800: {Name:mk68c8a0d4abd64a4c901a6d9785457b3310cf9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:00:26.469395  352564 certs.go:337] copying /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.crt.6c00c800 -> /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.crt
	I1002 11:00:26.469480  352564 certs.go:341] copying /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.key.6c00c800 -> /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.key
	I1002 11:00:26.469534  352564 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/proxy-client.key
	I1002 11:00:26.469548  352564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/proxy-client.crt with IP's: []
	I1002 11:00:26.641812  352564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/proxy-client.crt ...
	I1002 11:00:26.641853  352564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/proxy-client.crt: {Name:mkec793665c2272c0726d4ef95160f7f92965032 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:00:26.642046  352564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/proxy-client.key ...
	I1002 11:00:26.642064  352564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/proxy-client.key: {Name:mk1de32bb0cdfaab994d77e80da492165109eee9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:00:26.642161  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 11:00:26.642186  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 11:00:26.642212  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 11:00:26.642232  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 11:00:26.642250  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 11:00:26.642272  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 11:00:26.642290  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 11:00:26.642309  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 11:00:26.642404  352564 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:00:26.642464  352564 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:00:26.642482  352564 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:00:26.642521  352564 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:00:26.642558  352564 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:00:26.642594  352564 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:00:26.642650  352564 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:00:26.642688  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> /usr/share/ca-certificates/3398652.pem
	I1002 11:00:26.642710  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:00:26.642729  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem -> /usr/share/ca-certificates/339865.pem
	I1002 11:00:26.643307  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:00:26.674334  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:00:26.696801  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:00:26.718600  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:00:26.740512  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:00:26.762823  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:00:26.784766  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:00:26.806708  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:00:26.828676  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:00:26.851308  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:00:26.873783  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:00:26.896227  352564 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:00:26.912885  352564 ssh_runner.go:195] Run: openssl version
	I1002 11:00:26.918348  352564 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1002 11:00:26.918653  352564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:00:26.929100  352564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:00:26.933505  352564 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:00:26.933680  352564 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:00:26.933730  352564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:00:26.938973  352564 command_runner.go:130] > 3ec20f2e
	I1002 11:00:26.939092  352564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:00:26.950087  352564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:00:26.961209  352564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:00:26.965670  352564 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:00:26.965729  352564 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:00:26.965793  352564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:00:26.971651  352564 command_runner.go:130] > b5213941
	I1002 11:00:26.971734  352564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:00:26.983787  352564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:00:26.996705  352564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:00:27.001383  352564 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:00:27.001447  352564 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:00:27.001512  352564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:00:27.007345  352564 command_runner.go:130] > 51391683
	I1002 11:00:27.007583  352564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:00:27.019895  352564 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:00:27.024227  352564 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 11:00:27.024269  352564 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 11:00:27.024320  352564 kubeadm.go:404] StartCluster: {Name:multinode-224116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.2 ClusterName:multinode-224116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:00:27.024414  352564 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:00:27.024489  352564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:00:27.071667  352564 cri.go:89] found id: ""
	I1002 11:00:27.071753  352564 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:00:27.083420  352564 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1002 11:00:27.083445  352564 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1002 11:00:27.083452  352564 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1002 11:00:27.083527  352564 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:00:27.094726  352564 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:00:27.104493  352564 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1002 11:00:27.104540  352564 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1002 11:00:27.104553  352564 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1002 11:00:27.104572  352564 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:00:27.104611  352564 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:00:27.104666  352564 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 11:00:27.484372  352564 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 11:00:27.484405  352564 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 11:00:40.034910  352564 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 11:00:40.034946  352564 command_runner.go:130] > [init] Using Kubernetes version: v1.28.2
	I1002 11:00:40.035000  352564 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 11:00:40.035012  352564 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 11:00:40.035090  352564 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 11:00:40.035100  352564 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 11:00:40.035198  352564 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 11:00:40.035207  352564 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 11:00:40.035344  352564 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 11:00:40.035356  352564 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 11:00:40.035457  352564 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 11:00:40.037409  352564 out.go:204]   - Generating certificates and keys ...
	I1002 11:00:40.035501  352564 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 11:00:40.037525  352564 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 11:00:40.037557  352564 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1002 11:00:40.037655  352564 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 11:00:40.037670  352564 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1002 11:00:40.037769  352564 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 11:00:40.037780  352564 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 11:00:40.037854  352564 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1002 11:00:40.037866  352564 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1002 11:00:40.037960  352564 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1002 11:00:40.037975  352564 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1002 11:00:40.038045  352564 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1002 11:00:40.038057  352564 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1002 11:00:40.038134  352564 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1002 11:00:40.038154  352564 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1002 11:00:40.038280  352564 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-224116] and IPs [192.168.39.165 127.0.0.1 ::1]
	I1002 11:00:40.038288  352564 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-224116] and IPs [192.168.39.165 127.0.0.1 ::1]
	I1002 11:00:40.038328  352564 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1002 11:00:40.038347  352564 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1002 11:00:40.038529  352564 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-224116] and IPs [192.168.39.165 127.0.0.1 ::1]
	I1002 11:00:40.038544  352564 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-224116] and IPs [192.168.39.165 127.0.0.1 ::1]
	I1002 11:00:40.038634  352564 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 11:00:40.038652  352564 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 11:00:40.038747  352564 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 11:00:40.038759  352564 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 11:00:40.038829  352564 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1002 11:00:40.038837  352564 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1002 11:00:40.038914  352564 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 11:00:40.038926  352564 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 11:00:40.039005  352564 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 11:00:40.039016  352564 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 11:00:40.039091  352564 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 11:00:40.039103  352564 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 11:00:40.039169  352564 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 11:00:40.039179  352564 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 11:00:40.039237  352564 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 11:00:40.039250  352564 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 11:00:40.039315  352564 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 11:00:40.039322  352564 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 11:00:40.039406  352564 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 11:00:40.041170  352564 out.go:204]   - Booting up control plane ...
	I1002 11:00:40.039502  352564 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 11:00:40.041250  352564 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 11:00:40.041259  352564 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 11:00:40.041317  352564 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 11:00:40.041324  352564 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 11:00:40.041374  352564 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 11:00:40.041381  352564 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 11:00:40.041524  352564 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:00:40.041541  352564 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:00:40.041667  352564 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:00:40.041680  352564 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:00:40.041726  352564 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1002 11:00:40.041738  352564 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 11:00:40.041913  352564 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 11:00:40.041920  352564 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 11:00:40.041979  352564 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.005569 seconds
	I1002 11:00:40.041985  352564 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005569 seconds
	I1002 11:00:40.042086  352564 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 11:00:40.042095  352564 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 11:00:40.042278  352564 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 11:00:40.042290  352564 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 11:00:40.042376  352564 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1002 11:00:40.042386  352564 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 11:00:40.042636  352564 command_runner.go:130] > [mark-control-plane] Marking the node multinode-224116 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 11:00:40.042646  352564 kubeadm.go:322] [mark-control-plane] Marking the node multinode-224116 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 11:00:40.042721  352564 command_runner.go:130] > [bootstrap-token] Using token: ilpoyg.v8evcql3r2h36qws
	I1002 11:00:40.042784  352564 kubeadm.go:322] [bootstrap-token] Using token: ilpoyg.v8evcql3r2h36qws
	I1002 11:00:40.044547  352564 out.go:204]   - Configuring RBAC rules ...
	I1002 11:00:40.044704  352564 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 11:00:40.044719  352564 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 11:00:40.044805  352564 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 11:00:40.044813  352564 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 11:00:40.045027  352564 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 11:00:40.045031  352564 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 11:00:40.045167  352564 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 11:00:40.045175  352564 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 11:00:40.045316  352564 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 11:00:40.045326  352564 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 11:00:40.045417  352564 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 11:00:40.045424  352564 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 11:00:40.045521  352564 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 11:00:40.045527  352564 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 11:00:40.045561  352564 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1002 11:00:40.045567  352564 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 11:00:40.045607  352564 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1002 11:00:40.045613  352564 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 11:00:40.045617  352564 kubeadm.go:322] 
	I1002 11:00:40.045691  352564 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1002 11:00:40.045702  352564 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 11:00:40.045709  352564 kubeadm.go:322] 
	I1002 11:00:40.045817  352564 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1002 11:00:40.045829  352564 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 11:00:40.045835  352564 kubeadm.go:322] 
	I1002 11:00:40.045878  352564 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1002 11:00:40.045886  352564 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 11:00:40.045932  352564 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 11:00:40.045947  352564 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 11:00:40.046022  352564 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 11:00:40.046027  352564 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 11:00:40.046049  352564 kubeadm.go:322] 
	I1002 11:00:40.046147  352564 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1002 11:00:40.046172  352564 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 11:00:40.046189  352564 kubeadm.go:322] 
	I1002 11:00:40.046262  352564 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 11:00:40.046278  352564 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 11:00:40.046288  352564 kubeadm.go:322] 
	I1002 11:00:40.046387  352564 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1002 11:00:40.046398  352564 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 11:00:40.046503  352564 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 11:00:40.046512  352564 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 11:00:40.046615  352564 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 11:00:40.046631  352564 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 11:00:40.046640  352564 kubeadm.go:322] 
	I1002 11:00:40.046762  352564 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1002 11:00:40.046771  352564 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 11:00:40.046881  352564 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1002 11:00:40.046892  352564 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 11:00:40.046898  352564 kubeadm.go:322] 
	I1002 11:00:40.046986  352564 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token ilpoyg.v8evcql3r2h36qws \
	I1002 11:00:40.046996  352564 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ilpoyg.v8evcql3r2h36qws \
	I1002 11:00:40.047109  352564 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 11:00:40.047118  352564 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 11:00:40.047162  352564 command_runner.go:130] > 	--control-plane 
	I1002 11:00:40.047171  352564 kubeadm.go:322] 	--control-plane 
	I1002 11:00:40.047178  352564 kubeadm.go:322] 
	I1002 11:00:40.047276  352564 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1002 11:00:40.047287  352564 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 11:00:40.047293  352564 kubeadm.go:322] 
	I1002 11:00:40.047394  352564 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ilpoyg.v8evcql3r2h36qws \
	I1002 11:00:40.047406  352564 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ilpoyg.v8evcql3r2h36qws \
	I1002 11:00:40.047552  352564 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 11:00:40.047573  352564 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 11:00:40.047586  352564 cni.go:84] Creating CNI manager for ""
	I1002 11:00:40.047606  352564 cni.go:136] 1 nodes found, recommending kindnet
	I1002 11:00:40.049414  352564 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1002 11:00:40.050749  352564 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 11:00:40.073176  352564 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1002 11:00:40.073201  352564 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1002 11:00:40.073213  352564 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1002 11:00:40.073224  352564 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 11:00:40.073234  352564 command_runner.go:130] > Access: 2023-10-02 11:00:08.166811444 +0000
	I1002 11:00:40.073243  352564 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I1002 11:00:40.073255  352564 command_runner.go:130] > Change: 2023-10-02 11:00:06.319811444 +0000
	I1002 11:00:40.073262  352564 command_runner.go:130] >  Birth: -
	I1002 11:00:40.073414  352564 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 11:00:40.073432  352564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 11:00:40.114981  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 11:00:41.112022  352564 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1002 11:00:41.118521  352564 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1002 11:00:41.133785  352564 command_runner.go:130] > serviceaccount/kindnet created
	I1002 11:00:41.150641  352564 command_runner.go:130] > daemonset.apps/kindnet created
	I1002 11:00:41.153026  352564 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.038002369s)
	I1002 11:00:41.153074  352564 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:00:41.153193  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:41.153210  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=multinode-224116 minikube.k8s.io/updated_at=2023_10_02T11_00_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:41.173412  352564 command_runner.go:130] > -16
	I1002 11:00:41.173633  352564 ops.go:34] apiserver oom_adj: -16
	I1002 11:00:41.320443  352564 command_runner.go:130] > node/multinode-224116 labeled
	I1002 11:00:41.351569  352564 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1002 11:00:41.353645  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:41.447101  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:41.448747  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:41.527489  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:42.029805  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:42.118995  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:42.529631  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:42.616444  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:43.029690  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:43.114479  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:43.530131  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:43.615385  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:44.029504  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:44.116871  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:44.529522  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:44.615955  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:45.029717  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:45.108758  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:45.530031  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:45.611564  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:46.029855  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:46.113306  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:46.530039  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:46.618348  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:47.029456  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:47.113914  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:47.529928  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:47.612130  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:48.029361  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:48.111467  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:48.530124  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:48.615292  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:49.029390  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:49.115609  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:49.529159  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:49.615496  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:50.029757  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:50.119770  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:50.529393  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:50.619475  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:51.029637  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:51.124703  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:51.529555  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:51.669909  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:52.029947  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:52.136489  352564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 11:00:52.529886  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:00:52.642649  352564 command_runner.go:130] > NAME      SECRETS   AGE
	I1002 11:00:52.642673  352564 command_runner.go:130] > default   0         0s
	I1002 11:00:52.644213  352564 kubeadm.go:1081] duration metric: took 11.491101966s to wait for elevateKubeSystemPrivileges.
	I1002 11:00:52.644243  352564 kubeadm.go:406] StartCluster complete in 25.619928587s
	I1002 11:00:52.644269  352564 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:00:52.644389  352564 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:00:52.645096  352564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:00:52.645346  352564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:00:52.645380  352564 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:00:52.645466  352564 addons.go:69] Setting storage-provisioner=true in profile "multinode-224116"
	I1002 11:00:52.645487  352564 addons.go:231] Setting addon storage-provisioner=true in "multinode-224116"
	I1002 11:00:52.645486  352564 addons.go:69] Setting default-storageclass=true in profile "multinode-224116"
	I1002 11:00:52.645509  352564 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-224116"
	I1002 11:00:52.645539  352564 host.go:66] Checking if "multinode-224116" exists ...
	I1002 11:00:52.645570  352564 config.go:182] Loaded profile config "multinode-224116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:00:52.645693  352564 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:00:52.645997  352564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:00:52.646038  352564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:00:52.646049  352564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:00:52.646088  352564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:00:52.646003  352564 kapi.go:59] client config for multinode-224116: &rest.Config{Host:"https://192.168.39.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:00:52.646871  352564 cert_rotation.go:137] Starting client certificate rotation controller
	I1002 11:00:52.647404  352564 round_trippers.go:463] GET https://192.168.39.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 11:00:52.647423  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:52.647434  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:52.647444  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:52.658411  352564 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1002 11:00:52.658437  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:52.658445  352564 round_trippers.go:580]     Audit-Id: 7678d771-bb5c-4c76-87f0-949cbd841def
	I1002 11:00:52.658450  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:52.658455  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:52.658460  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:52.658465  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:52.658474  352564 round_trippers.go:580]     Content-Length: 291
	I1002 11:00:52.658482  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:52 GMT
	I1002 11:00:52.658517  352564 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08c5bbea-ba20-4e90-9cf5-25582be54095","resourceVersion":"344","creationTimestamp":"2023-10-02T11:00:39Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1002 11:00:52.658887  352564 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08c5bbea-ba20-4e90-9cf5-25582be54095","resourceVersion":"344","creationTimestamp":"2023-10-02T11:00:39Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1002 11:00:52.658955  352564 round_trippers.go:463] PUT https://192.168.39.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 11:00:52.658967  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:52.658976  352564 round_trippers.go:473]     Content-Type: application/json
	I1002 11:00:52.658982  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:52.658991  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:52.661988  352564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46719
	I1002 11:00:52.662169  352564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37997
	I1002 11:00:52.662525  352564 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:00:52.662594  352564 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:00:52.663013  352564 main.go:141] libmachine: Using API Version  1
	I1002 11:00:52.663034  352564 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:00:52.663108  352564 main.go:141] libmachine: Using API Version  1
	I1002 11:00:52.663132  352564 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:00:52.663382  352564 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:00:52.663448  352564 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:00:52.663652  352564 main.go:141] libmachine: (multinode-224116) Calling .GetState
	I1002 11:00:52.663997  352564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:00:52.664033  352564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:00:52.666062  352564 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:00:52.666462  352564 kapi.go:59] client config for multinode-224116: &rest.Config{Host:"https://192.168.39.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:00:52.666817  352564 addons.go:231] Setting addon default-storageclass=true in "multinode-224116"
	I1002 11:00:52.666862  352564 host.go:66] Checking if "multinode-224116" exists ...
	I1002 11:00:52.667329  352564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:00:52.667368  352564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:00:52.676901  352564 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1002 11:00:52.676933  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:52.676945  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:52 GMT
	I1002 11:00:52.676955  352564 round_trippers.go:580]     Audit-Id: 11a92427-38b1-40ee-a0f2-6d44605d3f34
	I1002 11:00:52.676963  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:52.676973  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:52.676990  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:52.676999  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:52.677008  352564 round_trippers.go:580]     Content-Length: 291
	I1002 11:00:52.677055  352564 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08c5bbea-ba20-4e90-9cf5-25582be54095","resourceVersion":"348","creationTimestamp":"2023-10-02T11:00:39Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1002 11:00:52.677242  352564 round_trippers.go:463] GET https://192.168.39.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 11:00:52.677260  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:52.677272  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:52.677282  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:52.679074  352564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41857
	I1002 11:00:52.679582  352564 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:00:52.680190  352564 main.go:141] libmachine: Using API Version  1
	I1002 11:00:52.680218  352564 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:00:52.680574  352564 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:00:52.680790  352564 main.go:141] libmachine: (multinode-224116) Calling .GetState
	I1002 11:00:52.682188  352564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37791
	I1002 11:00:52.682630  352564 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:00:52.682735  352564 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:00:52.684762  352564 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:00:52.683438  352564 main.go:141] libmachine: Using API Version  1
	I1002 11:00:52.686312  352564 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:00:52.686441  352564 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:00:52.686462  352564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:00:52.686485  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:00:52.686770  352564 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:00:52.687373  352564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:00:52.687406  352564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:00:52.689611  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:52.689966  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:52.689999  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:52.690317  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:00:52.690522  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:52.690731  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:00:52.690886  352564 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa Username:docker}
	I1002 11:00:52.695360  352564 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1002 11:00:52.695386  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:52.695398  352564 round_trippers.go:580]     Audit-Id: f64b10ea-b1f7-4c27-8bf5-e67ee89328c7
	I1002 11:00:52.695407  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:52.695421  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:52.695429  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:52.695441  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:52.695453  352564 round_trippers.go:580]     Content-Length: 291
	I1002 11:00:52.695464  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:52 GMT
	I1002 11:00:52.695502  352564 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08c5bbea-ba20-4e90-9cf5-25582be54095","resourceVersion":"348","creationTimestamp":"2023-10-02T11:00:39Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1002 11:00:52.695624  352564 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-224116" context rescaled to 1 replicas
	I1002 11:00:52.695660  352564 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:00:52.698547  352564 out.go:177] * Verifying Kubernetes components...
	I1002 11:00:52.700717  352564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:00:52.703155  352564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I1002 11:00:52.703572  352564 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:00:52.704086  352564 main.go:141] libmachine: Using API Version  1
	I1002 11:00:52.704120  352564 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:00:52.704399  352564 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:00:52.704602  352564 main.go:141] libmachine: (multinode-224116) Calling .GetState
	I1002 11:00:52.706250  352564 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:00:52.706549  352564 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:00:52.706570  352564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:00:52.706592  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:00:52.709716  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:52.710115  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:00:52.710151  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:00:52.710326  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:00:52.710549  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:00:52.710745  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:00:52.710921  352564 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa Username:docker}
	I1002 11:00:52.842227  352564 command_runner.go:130] > apiVersion: v1
	I1002 11:00:52.842258  352564 command_runner.go:130] > data:
	I1002 11:00:52.842264  352564 command_runner.go:130] >   Corefile: |
	I1002 11:00:52.842270  352564 command_runner.go:130] >     .:53 {
	I1002 11:00:52.842275  352564 command_runner.go:130] >         errors
	I1002 11:00:52.842281  352564 command_runner.go:130] >         health {
	I1002 11:00:52.842288  352564 command_runner.go:130] >            lameduck 5s
	I1002 11:00:52.842295  352564 command_runner.go:130] >         }
	I1002 11:00:52.842300  352564 command_runner.go:130] >         ready
	I1002 11:00:52.842312  352564 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1002 11:00:52.842320  352564 command_runner.go:130] >            pods insecure
	I1002 11:00:52.842334  352564 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1002 11:00:52.842346  352564 command_runner.go:130] >            ttl 30
	I1002 11:00:52.842363  352564 command_runner.go:130] >         }
	I1002 11:00:52.842384  352564 command_runner.go:130] >         prometheus :9153
	I1002 11:00:52.842397  352564 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1002 11:00:52.842409  352564 command_runner.go:130] >            max_concurrent 1000
	I1002 11:00:52.842420  352564 command_runner.go:130] >         }
	I1002 11:00:52.842429  352564 command_runner.go:130] >         cache 30
	I1002 11:00:52.842439  352564 command_runner.go:130] >         loop
	I1002 11:00:52.842449  352564 command_runner.go:130] >         reload
	I1002 11:00:52.842458  352564 command_runner.go:130] >         loadbalance
	I1002 11:00:52.842466  352564 command_runner.go:130] >     }
	I1002 11:00:52.842474  352564 command_runner.go:130] > kind: ConfigMap
	I1002 11:00:52.842484  352564 command_runner.go:130] > metadata:
	I1002 11:00:52.842497  352564 command_runner.go:130] >   creationTimestamp: "2023-10-02T11:00:39Z"
	I1002 11:00:52.842508  352564 command_runner.go:130] >   name: coredns
	I1002 11:00:52.842519  352564 command_runner.go:130] >   namespace: kube-system
	I1002 11:00:52.842530  352564 command_runner.go:130] >   resourceVersion: "230"
	I1002 11:00:52.842542  352564 command_runner.go:130] >   uid: 97cf364f-a332-48e3-9bc9-5e6bec4b59c1
	I1002 11:00:52.843859  352564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 11:00:52.844115  352564 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:00:52.844384  352564 kapi.go:59] client config for multinode-224116: &rest.Config{Host:"https://192.168.39.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:00:52.844664  352564 node_ready.go:35] waiting up to 6m0s for node "multinode-224116" to be "Ready" ...
	I1002 11:00:52.844762  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:52.844774  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:52.844785  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:52.844796  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:52.854755  352564 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1002 11:00:52.854783  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:52.854794  352564 round_trippers.go:580]     Audit-Id: ecca3cfb-de3c-49d6-8aa7-8c09868a303f
	I1002 11:00:52.854804  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:52.854812  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:52.854819  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:52.854828  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:52.854842  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:52 GMT
	I1002 11:00:52.854970  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"331","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1002 11:00:52.855786  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:52.855802  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:52.855812  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:52.855823  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:52.880772  352564 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I1002 11:00:52.880802  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:52.880810  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:52.880815  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:52.880820  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:52.880826  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:52 GMT
	I1002 11:00:52.880833  352564 round_trippers.go:580]     Audit-Id: c4e900de-6e0d-4c96-b1a4-6a1fa9cd87eb
	I1002 11:00:52.880841  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:52.888251  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"331","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1002 11:00:52.891009  352564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:00:52.919531  352564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:00:53.388938  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:53.388961  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:53.388971  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:53.388980  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:53.393549  352564 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 11:00:53.393570  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:53.393577  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:53.393583  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:53 GMT
	I1002 11:00:53.393588  352564 round_trippers.go:580]     Audit-Id: 4926aaf6-631f-49c6-917b-2d43fb764aab
	I1002 11:00:53.393595  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:53.393603  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:53.393614  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:53.394225  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"331","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1002 11:00:53.689906  352564 command_runner.go:130] > configmap/coredns replaced
	I1002 11:00:53.689944  352564 start.go:923] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1002 11:00:53.700533  352564 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1002 11:00:53.713790  352564 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1002 11:00:53.736841  352564 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1002 11:00:53.755954  352564 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1002 11:00:53.769610  352564 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1002 11:00:53.782607  352564 command_runner.go:130] > pod/storage-provisioner created
	I1002 11:00:53.785288  352564 main.go:141] libmachine: Making call to close driver server
	I1002 11:00:53.785303  352564 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1002 11:00:53.785315  352564 main.go:141] libmachine: (multinode-224116) Calling .Close
	I1002 11:00:53.785338  352564 main.go:141] libmachine: Making call to close driver server
	I1002 11:00:53.785351  352564 main.go:141] libmachine: (multinode-224116) Calling .Close
	I1002 11:00:53.785695  352564 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:00:53.785718  352564 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:00:53.785729  352564 main.go:141] libmachine: Making call to close driver server
	I1002 11:00:53.785737  352564 main.go:141] libmachine: (multinode-224116) Calling .Close
	I1002 11:00:53.785733  352564 main.go:141] libmachine: (multinode-224116) DBG | Closing plugin on server side
	I1002 11:00:53.785775  352564 main.go:141] libmachine: (multinode-224116) DBG | Closing plugin on server side
	I1002 11:00:53.785695  352564 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:00:53.785826  352564 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:00:53.785854  352564 main.go:141] libmachine: Making call to close driver server
	I1002 11:00:53.785868  352564 main.go:141] libmachine: (multinode-224116) Calling .Close
	I1002 11:00:53.785978  352564 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:00:53.785993  352564 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:00:53.786025  352564 main.go:141] libmachine: (multinode-224116) DBG | Closing plugin on server side
	I1002 11:00:53.786095  352564 round_trippers.go:463] GET https://192.168.39.165:8443/apis/storage.k8s.io/v1/storageclasses
	I1002 11:00:53.786102  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:53.786111  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:53.786120  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:53.786259  352564 main.go:141] libmachine: (multinode-224116) DBG | Closing plugin on server side
	I1002 11:00:53.786325  352564 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:00:53.786341  352564 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:00:53.795311  352564 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1002 11:00:53.795333  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:53.795341  352564 round_trippers.go:580]     Audit-Id: 35b9b8ac-a219-4c74-ba11-9a5c804d23ab
	I1002 11:00:53.795347  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:53.795353  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:53.795358  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:53.795364  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:53.795370  352564 round_trippers.go:580]     Content-Length: 1273
	I1002 11:00:53.795376  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:53 GMT
	I1002 11:00:53.795453  352564 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"370"},"items":[{"metadata":{"name":"standard","uid":"01af381d-03fe-415e-8802-b49239daf036","resourceVersion":"359","creationTimestamp":"2023-10-02T11:00:53Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-02T11:00:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1002 11:00:53.795819  352564 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"01af381d-03fe-415e-8802-b49239daf036","resourceVersion":"359","creationTimestamp":"2023-10-02T11:00:53Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-02T11:00:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1002 11:00:53.795864  352564 round_trippers.go:463] PUT https://192.168.39.165:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1002 11:00:53.795868  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:53.795875  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:53.795884  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:53.795890  352564 round_trippers.go:473]     Content-Type: application/json
	I1002 11:00:53.799184  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:00:53.799198  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:53.799205  352564 round_trippers.go:580]     Audit-Id: 01283a77-3241-4c13-bea1-b3916e026051
	I1002 11:00:53.799210  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:53.799215  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:53.799220  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:53.799226  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:53.799231  352564 round_trippers.go:580]     Content-Length: 1220
	I1002 11:00:53.799236  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:53 GMT
	I1002 11:00:53.799265  352564 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"01af381d-03fe-415e-8802-b49239daf036","resourceVersion":"359","creationTimestamp":"2023-10-02T11:00:53Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-02T11:00:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1002 11:00:53.799391  352564 main.go:141] libmachine: Making call to close driver server
	I1002 11:00:53.799405  352564 main.go:141] libmachine: (multinode-224116) Calling .Close
	I1002 11:00:53.799696  352564 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:00:53.799720  352564 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:00:53.799725  352564 main.go:141] libmachine: (multinode-224116) DBG | Closing plugin on server side
	I1002 11:00:53.802740  352564 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1002 11:00:53.804907  352564 addons.go:502] enable addons completed in 1.159531003s: enabled=[storage-provisioner default-storageclass]
	I1002 11:00:53.888802  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:53.888825  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:53.888833  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:53.888839  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:53.891305  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:00:53.891320  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:53.891327  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:53.891332  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:53.891337  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:53.891343  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:53.891348  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:53 GMT
	I1002 11:00:53.891356  352564 round_trippers.go:580]     Audit-Id: 9d39a82e-92c2-425b-b674-8bd80cb3c083
	I1002 11:00:53.891595  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"331","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1002 11:00:54.389255  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:54.389281  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:54.389297  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:54.389307  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:54.392090  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:00:54.392110  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:54.392117  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:54.392123  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:54.392129  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:54 GMT
	I1002 11:00:54.392134  352564 round_trippers.go:580]     Audit-Id: c0913b05-3354-4844-af97-f545520fbb1e
	I1002 11:00:54.392139  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:54.392144  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:54.392866  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"331","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1002 11:00:54.889659  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:54.889686  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:54.889699  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:54.889709  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:54.893944  352564 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 11:00:54.893972  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:54.893983  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:54.893993  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:54.894001  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:54.894009  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:54.894017  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:54 GMT
	I1002 11:00:54.894024  352564 round_trippers.go:580]     Audit-Id: 18d2568a-f564-4446-8c1f-914b0eaac7d7
	I1002 11:00:54.895532  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"331","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1002 11:00:54.895969  352564 node_ready.go:58] node "multinode-224116" has status "Ready":"False"
	I1002 11:00:55.389176  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:55.389203  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:55.389214  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:55.389222  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:55.391921  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:00:55.391949  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:55.391959  352564 round_trippers.go:580]     Audit-Id: f81f7272-9c6a-426a-b85a-ee144af16813
	I1002 11:00:55.391968  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:55.391981  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:55.391989  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:55.392087  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:55.392106  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:55 GMT
	I1002 11:00:55.392453  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"331","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1002 11:00:55.889181  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:55.889213  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:55.889226  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:55.889235  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:55.893742  352564 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 11:00:55.893769  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:55.893777  352564 round_trippers.go:580]     Audit-Id: fa282d23-f838-4509-9f3c-125c1db7a61b
	I1002 11:00:55.893785  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:55.893793  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:55.893801  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:55.893811  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:55.893820  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:55 GMT
	I1002 11:00:55.894099  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"331","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1002 11:00:56.389837  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:56.389863  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:56.389874  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:56.389882  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:56.392948  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:00:56.392977  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:56.392987  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:56 GMT
	I1002 11:00:56.392995  352564 round_trippers.go:580]     Audit-Id: 30b5626f-3a97-478d-ad64-ba5528553840
	I1002 11:00:56.393004  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:56.393019  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:56.393032  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:56.393043  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:56.393427  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"331","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1002 11:00:56.889104  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:56.889128  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:56.889137  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:56.889143  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:56.892298  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:00:56.892332  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:56.892341  352564 round_trippers.go:580]     Audit-Id: 5d3f2b76-f5e3-464e-9b3f-b37e388b8857
	I1002 11:00:56.892350  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:56.892357  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:56.892364  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:56.892372  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:56.892381  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:56 GMT
	I1002 11:00:56.892612  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"331","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1002 11:00:57.389206  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:57.389234  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:57.389249  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:57.389259  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:57.392032  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:00:57.392057  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:57.392064  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:57.392070  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:57.392076  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:57 GMT
	I1002 11:00:57.392081  352564 round_trippers.go:580]     Audit-Id: 74e66a47-1f30-46e1-8e2d-8562dd57d5b0
	I1002 11:00:57.392087  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:57.392099  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:57.392266  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"331","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1002 11:00:57.392645  352564 node_ready.go:58] node "multinode-224116" has status "Ready":"False"
	I1002 11:00:57.889638  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:57.889664  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:57.889674  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:57.889681  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:57.893533  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:00:57.893561  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:57.893572  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:57.893581  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:57 GMT
	I1002 11:00:57.893589  352564 round_trippers.go:580]     Audit-Id: 7108ffee-cfd3-40ef-b337-2a8a7916f131
	I1002 11:00:57.893604  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:57.893613  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:57.893624  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:57.894187  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"331","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1002 11:00:58.388850  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:58.388888  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:58.388896  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:58.388903  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:58.395101  352564 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1002 11:00:58.395130  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:58.395142  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:58.395151  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:58.395160  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:58.395168  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:58 GMT
	I1002 11:00:58.395181  352564 round_trippers.go:580]     Audit-Id: 4d55af5e-07c2-4806-a5f4-d71874ff3213
	I1002 11:00:58.395190  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:58.395308  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:00:58.395815  352564 node_ready.go:49] node "multinode-224116" has status "Ready":"True"
	I1002 11:00:58.395839  352564 node_ready.go:38] duration metric: took 5.551157712s waiting for node "multinode-224116" to be "Ready" ...
	I1002 11:00:58.395856  352564 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:00:58.396039  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I1002 11:00:58.396060  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:58.396068  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:58.396076  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:58.401406  352564 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 11:00:58.401425  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:58.401432  352564 round_trippers.go:580]     Audit-Id: 097a1ad2-6079-4627-ab72-243717613d15
	I1002 11:00:58.401438  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:58.401445  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:58.401451  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:58.401456  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:58.401462  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:58 GMT
	I1002 11:00:58.406838  352564 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"392"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"388","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54368 chars]
	I1002 11:00:58.412088  352564 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace to be "Ready" ...
	I1002 11:00:58.412187  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:00:58.412198  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:58.412210  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:58.412223  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:58.424815  352564 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1002 11:00:58.424842  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:58.424852  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:58.424861  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:58.424869  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:58.424878  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:58.424886  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:58 GMT
	I1002 11:00:58.424895  352564 round_trippers.go:580]     Audit-Id: 5c92877d-9215-4213-9f7e-69c024909599
	I1002 11:00:58.425570  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"388","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1002 11:00:58.426010  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:58.426028  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:58.426038  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:58.426046  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:58.428372  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:00:58.428389  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:58.428396  352564 round_trippers.go:580]     Audit-Id: 1a3b32ee-cf21-4bba-a0ef-d3efe7059d90
	I1002 11:00:58.428402  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:58.428407  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:58.428412  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:58.428418  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:58.428426  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:58 GMT
	I1002 11:00:58.428660  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:00:58.429097  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:00:58.429112  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:58.429119  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:58.429125  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:58.432314  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:00:58.432332  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:58.432341  352564 round_trippers.go:580]     Audit-Id: de32e9fc-7f82-402d-8230-90621675f0a1
	I1002 11:00:58.432349  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:58.432357  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:58.432365  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:58.432378  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:58.432387  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:58 GMT
	I1002 11:00:58.432644  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"388","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1002 11:00:58.433094  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:58.433110  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:58.433120  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:58.433131  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:58.435336  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:00:58.435353  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:58.435364  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:58 GMT
	I1002 11:00:58.435374  352564 round_trippers.go:580]     Audit-Id: df667d9f-6ddd-422c-9151-f56730ffea7a
	I1002 11:00:58.435382  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:58.435397  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:58.435403  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:58.435408  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:58.435595  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:00:58.936750  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:00:58.936783  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:58.936796  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:58.936806  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:58.942524  352564 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 11:00:58.942547  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:58.942560  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:58.942568  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:58 GMT
	I1002 11:00:58.942575  352564 round_trippers.go:580]     Audit-Id: 2091b3c1-1543-40d1-a219-fce472133ca3
	I1002 11:00:58.942582  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:58.942591  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:58.942599  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:58.942764  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"388","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1002 11:00:58.943361  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:58.943377  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:58.943390  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:58.943401  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:58.946410  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:00:58.946431  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:58.946442  352564 round_trippers.go:580]     Audit-Id: ca0fbd15-85d6-4e94-845d-177f668f9c1c
	I1002 11:00:58.946450  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:58.946461  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:58.946476  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:58.946491  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:58.946499  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:58 GMT
	I1002 11:00:58.946654  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:00:59.436285  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:00:59.436315  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:59.436329  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:59.436339  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:59.440699  352564 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 11:00:59.440727  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:59.440737  352564 round_trippers.go:580]     Audit-Id: 676cd63d-24ed-4c6a-bc40-c363d0c933ae
	I1002 11:00:59.440746  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:59.440755  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:59.440763  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:59.440771  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:59.440778  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:59 GMT
	I1002 11:00:59.441006  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"388","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1002 11:00:59.441491  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:59.441508  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:59.441515  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:59.441523  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:59.444056  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:00:59.444078  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:59.444088  352564 round_trippers.go:580]     Audit-Id: 3325f3f8-ee95-4c9d-97e4-0231df1b99c2
	I1002 11:00:59.444096  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:59.444107  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:59.444114  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:59.444123  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:59.444131  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:59 GMT
	I1002 11:00:59.444281  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:00:59.936476  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:00:59.936501  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:59.936509  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:59.936515  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:59.939649  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:00:59.939677  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:59.939690  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:59.939699  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:59.939715  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:59 GMT
	I1002 11:00:59.939723  352564 round_trippers.go:580]     Audit-Id: c78c8c6b-38c5-44aa-b952-37811c250f07
	I1002 11:00:59.939732  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:59.939741  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:59.940398  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"388","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1002 11:00:59.941024  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:00:59.941045  352564 round_trippers.go:469] Request Headers:
	I1002 11:00:59.941056  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:00:59.941066  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:00:59.943207  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:00:59.943228  352564 round_trippers.go:577] Response Headers:
	I1002 11:00:59.943237  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:00:59.943245  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:00:59.943253  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:00:59.943261  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:00:59.943269  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:00:59 GMT
	I1002 11:00:59.943282  352564 round_trippers.go:580]     Audit-Id: 27991f9e-5d49-41b5-823f-05671a023ac3
	I1002 11:00:59.943465  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:01:00.436158  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:01:00.436185  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:00.436193  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:00.436199  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:00.438867  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:00.438895  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:00.438906  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:00.438914  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:00 GMT
	I1002 11:01:00.438923  352564 round_trippers.go:580]     Audit-Id: 94439834-1559-4ba9-a60d-1d0fab1e7a9b
	I1002 11:01:00.438931  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:00.438941  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:00.438949  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:00.439130  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"407","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1002 11:01:00.439652  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:01:00.439667  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:00.439675  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:00.439681  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:00.441637  352564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:01:00.441653  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:00.441660  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:00 GMT
	I1002 11:01:00.441665  352564 round_trippers.go:580]     Audit-Id: 6b470657-efc5-400b-a943-98d16cfbeea8
	I1002 11:01:00.441670  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:00.441675  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:00.441681  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:00.441689  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:00.442078  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:01:00.442466  352564 pod_ready.go:92] pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace has status "Ready":"True"
	I1002 11:01:00.442490  352564 pod_ready.go:81] duration metric: took 2.03037738s waiting for pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:00.442507  352564 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:00.442581  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-224116
	I1002 11:01:00.442590  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:00.442597  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:00.442603  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:00.444606  352564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:01:00.444621  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:00.444632  352564 round_trippers.go:580]     Audit-Id: 3575216a-3d1c-40b1-9f7e-4c88e01e98d4
	I1002 11:01:00.444637  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:00.444643  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:00.444649  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:00.444657  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:00.444665  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:00 GMT
	I1002 11:01:00.444984  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-224116","namespace":"kube-system","uid":"5accde9f-e62c-422f-aaa1-ddf4f8f0da05","resourceVersion":"402","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.165:2379","kubernetes.io/config.hash":"6042e2dc777b7ecb1e5f00a006739c52","kubernetes.io/config.mirror":"6042e2dc777b7ecb1e5f00a006739c52","kubernetes.io/config.seen":"2023-10-02T11:00:31.044390279Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1002 11:01:00.445453  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:01:00.445469  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:00.445476  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:00.445483  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:00.447581  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:00.447601  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:00.447607  352564 round_trippers.go:580]     Audit-Id: 62433ddf-84a1-4efa-9f1a-00f1e1c45c87
	I1002 11:01:00.447612  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:00.447617  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:00.447627  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:00.447632  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:00.447640  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:00 GMT
	I1002 11:01:00.447914  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:01:00.448267  352564 pod_ready.go:92] pod "etcd-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:01:00.448284  352564 pod_ready.go:81] duration metric: took 5.764496ms waiting for pod "etcd-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:00.448295  352564 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:00.448348  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-224116
	I1002 11:01:00.448355  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:00.448362  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:00.448368  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:00.450735  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:00.450756  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:00.450765  352564 round_trippers.go:580]     Audit-Id: 0666d3df-932e-4d40-91c5-1de8d7d09b30
	I1002 11:01:00.450770  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:00.450776  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:00.450782  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:00.450790  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:00.450798  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:00 GMT
	I1002 11:01:00.450962  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-224116","namespace":"kube-system","uid":"26841310-e8b5-409e-8915-888db5e257ab","resourceVersion":"302","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.165:8443","kubernetes.io/config.hash":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.mirror":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.seen":"2023-10-02T11:00:31.044391274Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1002 11:01:00.451451  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:01:00.451466  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:00.451473  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:00.451479  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:00.453156  352564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:01:00.453173  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:00.453183  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:00.453191  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:00 GMT
	I1002 11:01:00.453200  352564 round_trippers.go:580]     Audit-Id: 9c6982e5-18d3-4ca6-8d18-8a903243476d
	I1002 11:01:00.453212  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:00.453221  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:00.453231  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:00.453421  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:01:00.453760  352564 pod_ready.go:92] pod "kube-apiserver-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:01:00.453778  352564 pod_ready.go:81] duration metric: took 5.477423ms waiting for pod "kube-apiserver-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:00.453788  352564 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:00.453834  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-224116
	I1002 11:01:00.453842  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:00.453849  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:00.453855  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:00.455977  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:00.455995  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:00.456004  352564 round_trippers.go:580]     Audit-Id: 53b7127e-7fb5-401e-a276-99744348f05b
	I1002 11:01:00.456012  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:00.456020  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:00.456030  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:00.456039  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:00.456053  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:00 GMT
	I1002 11:01:00.456367  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-224116","namespace":"kube-system","uid":"7d71d06a-a323-41ce-a7a4-c7d33880f9fa","resourceVersion":"403","creationTimestamp":"2023-10-02T11:00:40Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3bee2fdb49df38e62a3033b15b9a59ad","kubernetes.io/config.mirror":"3bee2fdb49df38e62a3033b15b9a59ad","kubernetes.io/config.seen":"2023-10-02T11:00:39.980801936Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1002 11:01:00.456816  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:01:00.456831  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:00.456838  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:00.456844  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:00.458662  352564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:01:00.458680  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:00.458686  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:00.458691  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:00 GMT
	I1002 11:01:00.458696  352564 round_trippers.go:580]     Audit-Id: 0b3f838f-2e05-4b87-969b-34d76af041d1
	I1002 11:01:00.458701  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:00.458706  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:00.458711  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:00.458972  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:01:00.459229  352564 pod_ready.go:92] pod "kube-controller-manager-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:01:00.459242  352564 pod_ready.go:81] duration metric: took 5.447021ms waiting for pod "kube-controller-manager-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:00.459253  352564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nshcj" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:00.589670  352564 request.go:629] Waited for 130.335645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nshcj
	I1002 11:01:00.589750  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nshcj
	I1002 11:01:00.589755  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:00.589763  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:00.589769  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:00.593217  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:01:00.593238  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:00.593245  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:00.593253  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:00.593261  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:00 GMT
	I1002 11:01:00.593269  352564 round_trippers.go:580]     Audit-Id: 5364078f-23af-42c4-b2ae-9050c5318fe9
	I1002 11:01:00.593277  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:00.593284  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:00.593509  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nshcj","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3def928-5e43-4f7e-8ae2-3c0daafd0003","resourceVersion":"375","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3580e93b-6724-4c5c-baca-8e1b963cda9a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3580e93b-6724-4c5c-baca-8e1b963cda9a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1002 11:01:00.789423  352564 request.go:629] Waited for 195.391844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:01:00.789513  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:01:00.789519  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:00.789532  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:00.789542  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:00.792360  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:00.792384  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:00.792390  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:00 GMT
	I1002 11:01:00.792396  352564 round_trippers.go:580]     Audit-Id: ec3789b3-80c1-4d53-8ba2-be83a76ced63
	I1002 11:01:00.792402  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:00.792407  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:00.792412  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:00.792417  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:00.792593  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:01:00.792980  352564 pod_ready.go:92] pod "kube-proxy-nshcj" in "kube-system" namespace has status "Ready":"True"
	I1002 11:01:00.792996  352564 pod_ready.go:81] duration metric: took 333.738211ms waiting for pod "kube-proxy-nshcj" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:00.793006  352564 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:00.989474  352564 request.go:629] Waited for 196.399512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-224116
	I1002 11:01:00.989552  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-224116
	I1002 11:01:00.989557  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:00.989576  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:00.989584  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:00.992170  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:00.992190  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:00.992197  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:00.992202  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:00.992207  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:00 GMT
	I1002 11:01:00.992212  352564 round_trippers.go:580]     Audit-Id: 069903f1-0948-4cf7-a7f2-12cbef9760c8
	I1002 11:01:00.992217  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:00.992222  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:00.992573  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-224116","namespace":"kube-system","uid":"66f95d23-f489-423f-9008-a7cf03a9ee55","resourceVersion":"361","creationTimestamp":"2023-10-02T11:00:40Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9bd8dc11ca1ef87923294a95bf3b31e7","kubernetes.io/config.mirror":"9bd8dc11ca1ef87923294a95bf3b31e7","kubernetes.io/config.seen":"2023-10-02T11:00:39.980802889Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1002 11:01:01.189333  352564 request.go:629] Waited for 196.350192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:01:01.189399  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:01:01.189404  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:01.189412  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:01.189424  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:01.194098  352564 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 11:01:01.194126  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:01.194136  352564 round_trippers.go:580]     Audit-Id: dd96382c-1cdb-417d-8e86-37be33dabf31
	I1002 11:01:01.194144  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:01.194153  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:01.194161  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:01.194170  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:01.194178  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:01 GMT
	I1002 11:01:01.194287  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:01:01.194739  352564 pod_ready.go:92] pod "kube-scheduler-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:01:01.194762  352564 pod_ready.go:81] duration metric: took 401.749269ms waiting for pod "kube-scheduler-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:01.194776  352564 pod_ready.go:38] duration metric: took 2.798897618s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:01:01.194795  352564 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:01:01.194882  352564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:01:01.207699  352564 command_runner.go:130] > 1091
	I1002 11:01:01.207795  352564 api_server.go:72] duration metric: took 8.512097346s to wait for apiserver process to appear ...
	I1002 11:01:01.207825  352564 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:01:01.207845  352564 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I1002 11:01:01.212669  352564 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I1002 11:01:01.212759  352564 round_trippers.go:463] GET https://192.168.39.165:8443/version
	I1002 11:01:01.212771  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:01.212782  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:01.212795  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:01.214214  352564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:01:01.214236  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:01.214246  352564 round_trippers.go:580]     Content-Length: 263
	I1002 11:01:01.214255  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:01 GMT
	I1002 11:01:01.214263  352564 round_trippers.go:580]     Audit-Id: 4b7fd6ca-241c-4e45-8603-002e5a9c2868
	I1002 11:01:01.214270  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:01.214278  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:01.214290  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:01.214301  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:01.214328  352564 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1002 11:01:01.214448  352564 api_server.go:141] control plane version: v1.28.2
	I1002 11:01:01.214471  352564 api_server.go:131] duration metric: took 6.638018ms to wait for apiserver health ...
	I1002 11:01:01.214481  352564 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:01:01.389940  352564 request.go:629] Waited for 175.359884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I1002 11:01:01.390023  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I1002 11:01:01.390031  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:01.390043  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:01.390054  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:01.393848  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:01:01.393877  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:01.393888  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:01.393896  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:01.393913  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:01.393922  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:01.393931  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:01 GMT
	I1002 11:01:01.393943  352564 round_trippers.go:580]     Audit-Id: 4f996f5d-1f86-4c00-abde-241e364ece41
	I1002 11:01:01.395006  352564 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"407","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I1002 11:01:01.396674  352564 system_pods.go:59] 8 kube-system pods found
	I1002 11:01:01.396693  352564 system_pods.go:61] "coredns-5dd5756b68-h6gbq" [49ee2f4a-1c73-4642-bd3b-678e6cb9ef55] Running
	I1002 11:01:01.396698  352564 system_pods.go:61] "etcd-multinode-224116" [5accde9f-e62c-422f-aaa1-ddf4f8f0da05] Running
	I1002 11:01:01.396702  352564 system_pods.go:61] "kindnet-f7m28" [dc1438f0-bd67-457d-9e7e-b8998a01b029] Running
	I1002 11:01:01.396707  352564 system_pods.go:61] "kube-apiserver-multinode-224116" [26841310-e8b5-409e-8915-888db5e257ab] Running
	I1002 11:01:01.396711  352564 system_pods.go:61] "kube-controller-manager-multinode-224116" [7d71d06a-a323-41ce-a7a4-c7d33880f9fa] Running
	I1002 11:01:01.396718  352564 system_pods.go:61] "kube-proxy-nshcj" [f3def928-5e43-4f7e-8ae2-3c0daafd0003] Running
	I1002 11:01:01.396722  352564 system_pods.go:61] "kube-scheduler-multinode-224116" [66f95d23-f489-423f-9008-a7cf03a9ee55] Running
	I1002 11:01:01.396729  352564 system_pods.go:61] "storage-provisioner" [ea5da043-58ea-4918-836d-19655c55b885] Running
	I1002 11:01:01.396735  352564 system_pods.go:74] duration metric: took 182.245021ms to wait for pod list to return data ...
	I1002 11:01:01.396742  352564 default_sa.go:34] waiting for default service account to be created ...
	I1002 11:01:01.589277  352564 request.go:629] Waited for 192.431898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I1002 11:01:01.589368  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I1002 11:01:01.589382  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:01.589394  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:01.589404  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:01.592415  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:01.592454  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:01.592462  352564 round_trippers.go:580]     Audit-Id: e2082ff5-75ba-475f-b8a6-7d7f2083fe1b
	I1002 11:01:01.592468  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:01.592473  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:01.592479  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:01.592484  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:01.592490  352564 round_trippers.go:580]     Content-Length: 261
	I1002 11:01:01.592496  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:01 GMT
	I1002 11:01:01.592525  352564 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1d1f48a9-6a1e-4e03-8f78-cde5f832a3a7","resourceVersion":"304","creationTimestamp":"2023-10-02T11:00:52Z"}}]}
	I1002 11:01:01.592736  352564 default_sa.go:45] found service account: "default"
	I1002 11:01:01.592752  352564 default_sa.go:55] duration metric: took 196.003303ms for default service account to be created ...
	I1002 11:01:01.592760  352564 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 11:01:01.789202  352564 request.go:629] Waited for 196.356683ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I1002 11:01:01.789272  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I1002 11:01:01.789277  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:01.789286  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:01.789292  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:01.793316  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:01:01.793352  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:01.793364  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:01.793374  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:01.793393  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:01 GMT
	I1002 11:01:01.793403  352564 round_trippers.go:580]     Audit-Id: 039d46d3-be3f-495f-b704-0aeaf0547ff3
	I1002 11:01:01.793412  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:01.793428  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:01.794809  352564 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"407","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I1002 11:01:01.796495  352564 system_pods.go:86] 8 kube-system pods found
	I1002 11:01:01.796516  352564 system_pods.go:89] "coredns-5dd5756b68-h6gbq" [49ee2f4a-1c73-4642-bd3b-678e6cb9ef55] Running
	I1002 11:01:01.796521  352564 system_pods.go:89] "etcd-multinode-224116" [5accde9f-e62c-422f-aaa1-ddf4f8f0da05] Running
	I1002 11:01:01.796525  352564 system_pods.go:89] "kindnet-f7m28" [dc1438f0-bd67-457d-9e7e-b8998a01b029] Running
	I1002 11:01:01.796530  352564 system_pods.go:89] "kube-apiserver-multinode-224116" [26841310-e8b5-409e-8915-888db5e257ab] Running
	I1002 11:01:01.796537  352564 system_pods.go:89] "kube-controller-manager-multinode-224116" [7d71d06a-a323-41ce-a7a4-c7d33880f9fa] Running
	I1002 11:01:01.796541  352564 system_pods.go:89] "kube-proxy-nshcj" [f3def928-5e43-4f7e-8ae2-3c0daafd0003] Running
	I1002 11:01:01.796545  352564 system_pods.go:89] "kube-scheduler-multinode-224116" [66f95d23-f489-423f-9008-a7cf03a9ee55] Running
	I1002 11:01:01.796549  352564 system_pods.go:89] "storage-provisioner" [ea5da043-58ea-4918-836d-19655c55b885] Running
	I1002 11:01:01.796558  352564 system_pods.go:126] duration metric: took 203.792011ms to wait for k8s-apps to be running ...
	I1002 11:01:01.796566  352564 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:01:01.796611  352564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:01:01.810979  352564 system_svc.go:56] duration metric: took 14.397933ms WaitForService to wait for kubelet.
	I1002 11:01:01.811016  352564 kubeadm.go:581] duration metric: took 9.115324787s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:01:01.811052  352564 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:01:01.989549  352564 request.go:629] Waited for 178.390268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes
	I1002 11:01:01.989615  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes
	I1002 11:01:01.989620  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:01.989628  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:01.989635  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:01.992873  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:01:01.992903  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:01.992913  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:01.992921  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:01.992934  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:01.992942  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:01 GMT
	I1002 11:01:01.992949  352564 round_trippers.go:580]     Audit-Id: 6e4e25c4-3912-4a1f-a563-964cdcaaf564
	I1002 11:01:01.992957  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:01.993368  352564 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I1002 11:01:01.993886  352564 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:01:01.993918  352564 node_conditions.go:123] node cpu capacity is 2
	I1002 11:01:01.993940  352564 node_conditions.go:105] duration metric: took 182.876878ms to run NodePressure ...
	I1002 11:01:01.993961  352564 start.go:228] waiting for startup goroutines ...
	I1002 11:01:01.993975  352564 start.go:233] waiting for cluster config update ...
	I1002 11:01:01.993988  352564 start.go:242] writing updated cluster config ...
	I1002 11:01:01.996556  352564 out.go:177] 
	I1002 11:01:01.998483  352564 config.go:182] Loaded profile config "multinode-224116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:01:01.998569  352564 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/config.json ...
	I1002 11:01:02.000573  352564 out.go:177] * Starting worker node multinode-224116-m02 in cluster multinode-224116
	I1002 11:01:02.002193  352564 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:01:02.002223  352564 cache.go:57] Caching tarball of preloaded images
	I1002 11:01:02.002341  352564 preload.go:174] Found /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 11:01:02.002373  352564 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 11:01:02.002490  352564 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/config.json ...
	I1002 11:01:02.002672  352564 start.go:365] acquiring machines lock for multinode-224116-m02: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:01:02.002722  352564 start.go:369] acquired machines lock for "multinode-224116-m02" in 30.064µs
	I1002 11:01:02.002740  352564 start.go:93] Provisioning new machine with config: &{Name:multinode-224116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.2 ClusterName:multinode-224116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1002 11:01:02.002812  352564 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1002 11:01:02.004884  352564 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 11:01:02.004973  352564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:01:02.005004  352564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:01:02.019481  352564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40799
	I1002 11:01:02.019932  352564 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:01:02.020388  352564 main.go:141] libmachine: Using API Version  1
	I1002 11:01:02.020412  352564 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:01:02.020755  352564 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:01:02.020992  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetMachineName
	I1002 11:01:02.021170  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .DriverName
	I1002 11:01:02.021328  352564 start.go:159] libmachine.API.Create for "multinode-224116" (driver="kvm2")
	I1002 11:01:02.021360  352564 client.go:168] LocalClient.Create starting
	I1002 11:01:02.021392  352564 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem
	I1002 11:01:02.021425  352564 main.go:141] libmachine: Decoding PEM data...
	I1002 11:01:02.021442  352564 main.go:141] libmachine: Parsing certificate...
	I1002 11:01:02.021513  352564 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem
	I1002 11:01:02.021532  352564 main.go:141] libmachine: Decoding PEM data...
	I1002 11:01:02.021544  352564 main.go:141] libmachine: Parsing certificate...
	I1002 11:01:02.021561  352564 main.go:141] libmachine: Running pre-create checks...
	I1002 11:01:02.021570  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .PreCreateCheck
	I1002 11:01:02.021785  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetConfigRaw
	I1002 11:01:02.022185  352564 main.go:141] libmachine: Creating machine...
	I1002 11:01:02.022204  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .Create
	I1002 11:01:02.022334  352564 main.go:141] libmachine: (multinode-224116-m02) Creating KVM machine...
	I1002 11:01:02.023493  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found existing default KVM network
	I1002 11:01:02.023733  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found existing private KVM network mk-multinode-224116
	I1002 11:01:02.023870  352564 main.go:141] libmachine: (multinode-224116-m02) Setting up store path in /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02 ...
	I1002 11:01:02.023901  352564 main.go:141] libmachine: (multinode-224116-m02) Building disk image from file:///home/jenkins/minikube-integration/17340-332611/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1002 11:01:02.023932  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:02.023829  352927 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:01:02.024050  352564 main.go:141] libmachine: (multinode-224116-m02) Downloading /home/jenkins/minikube-integration/17340-332611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17340-332611/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1002 11:01:02.258837  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:02.258673  352927 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02/id_rsa...
	I1002 11:01:02.427926  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:02.427268  352927 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02/multinode-224116-m02.rawdisk...
	I1002 11:01:02.427962  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | Writing magic tar header
	I1002 11:01:02.427986  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | Writing SSH key tar header
	I1002 11:01:02.428001  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:02.427944  352927 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02 ...
	I1002 11:01:02.428140  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02
	I1002 11:01:02.428174  352564 main.go:141] libmachine: (multinode-224116-m02) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02 (perms=drwx------)
	I1002 11:01:02.428190  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube/machines
	I1002 11:01:02.428203  352564 main.go:141] libmachine: (multinode-224116-m02) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube/machines (perms=drwxr-xr-x)
	I1002 11:01:02.428221  352564 main.go:141] libmachine: (multinode-224116-m02) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube (perms=drwxr-xr-x)
	I1002 11:01:02.428238  352564 main.go:141] libmachine: (multinode-224116-m02) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611 (perms=drwxrwxr-x)
	I1002 11:01:02.428270  352564 main.go:141] libmachine: (multinode-224116-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 11:01:02.428303  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:01:02.428314  352564 main.go:141] libmachine: (multinode-224116-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 11:01:02.428330  352564 main.go:141] libmachine: (multinode-224116-m02) Creating domain...
	I1002 11:01:02.428342  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611
	I1002 11:01:02.428351  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1002 11:01:02.428365  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | Checking permissions on dir: /home/jenkins
	I1002 11:01:02.428377  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | Checking permissions on dir: /home
	I1002 11:01:02.428388  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | Skipping /home - not owner
	I1002 11:01:02.429344  352564 main.go:141] libmachine: (multinode-224116-m02) define libvirt domain using xml: 
	I1002 11:01:02.429373  352564 main.go:141] libmachine: (multinode-224116-m02) <domain type='kvm'>
	I1002 11:01:02.429384  352564 main.go:141] libmachine: (multinode-224116-m02)   <name>multinode-224116-m02</name>
	I1002 11:01:02.429395  352564 main.go:141] libmachine: (multinode-224116-m02)   <memory unit='MiB'>2200</memory>
	I1002 11:01:02.429410  352564 main.go:141] libmachine: (multinode-224116-m02)   <vcpu>2</vcpu>
	I1002 11:01:02.429422  352564 main.go:141] libmachine: (multinode-224116-m02)   <features>
	I1002 11:01:02.429433  352564 main.go:141] libmachine: (multinode-224116-m02)     <acpi/>
	I1002 11:01:02.429446  352564 main.go:141] libmachine: (multinode-224116-m02)     <apic/>
	I1002 11:01:02.429461  352564 main.go:141] libmachine: (multinode-224116-m02)     <pae/>
	I1002 11:01:02.429504  352564 main.go:141] libmachine: (multinode-224116-m02)     
	I1002 11:01:02.429518  352564 main.go:141] libmachine: (multinode-224116-m02)   </features>
	I1002 11:01:02.429527  352564 main.go:141] libmachine: (multinode-224116-m02)   <cpu mode='host-passthrough'>
	I1002 11:01:02.429537  352564 main.go:141] libmachine: (multinode-224116-m02)   
	I1002 11:01:02.429550  352564 main.go:141] libmachine: (multinode-224116-m02)   </cpu>
	I1002 11:01:02.429565  352564 main.go:141] libmachine: (multinode-224116-m02)   <os>
	I1002 11:01:02.429582  352564 main.go:141] libmachine: (multinode-224116-m02)     <type>hvm</type>
	I1002 11:01:02.429598  352564 main.go:141] libmachine: (multinode-224116-m02)     <boot dev='cdrom'/>
	I1002 11:01:02.429612  352564 main.go:141] libmachine: (multinode-224116-m02)     <boot dev='hd'/>
	I1002 11:01:02.429627  352564 main.go:141] libmachine: (multinode-224116-m02)     <bootmenu enable='no'/>
	I1002 11:01:02.429639  352564 main.go:141] libmachine: (multinode-224116-m02)   </os>
	I1002 11:01:02.429670  352564 main.go:141] libmachine: (multinode-224116-m02)   <devices>
	I1002 11:01:02.429700  352564 main.go:141] libmachine: (multinode-224116-m02)     <disk type='file' device='cdrom'>
	I1002 11:01:02.429731  352564 main.go:141] libmachine: (multinode-224116-m02)       <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02/boot2docker.iso'/>
	I1002 11:01:02.429753  352564 main.go:141] libmachine: (multinode-224116-m02)       <target dev='hdc' bus='scsi'/>
	I1002 11:01:02.429768  352564 main.go:141] libmachine: (multinode-224116-m02)       <readonly/>
	I1002 11:01:02.429778  352564 main.go:141] libmachine: (multinode-224116-m02)     </disk>
	I1002 11:01:02.429785  352564 main.go:141] libmachine: (multinode-224116-m02)     <disk type='file' device='disk'>
	I1002 11:01:02.429795  352564 main.go:141] libmachine: (multinode-224116-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 11:01:02.429807  352564 main.go:141] libmachine: (multinode-224116-m02)       <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02/multinode-224116-m02.rawdisk'/>
	I1002 11:01:02.429814  352564 main.go:141] libmachine: (multinode-224116-m02)       <target dev='hda' bus='virtio'/>
	I1002 11:01:02.429821  352564 main.go:141] libmachine: (multinode-224116-m02)     </disk>
	I1002 11:01:02.429829  352564 main.go:141] libmachine: (multinode-224116-m02)     <interface type='network'>
	I1002 11:01:02.429846  352564 main.go:141] libmachine: (multinode-224116-m02)       <source network='mk-multinode-224116'/>
	I1002 11:01:02.429866  352564 main.go:141] libmachine: (multinode-224116-m02)       <model type='virtio'/>
	I1002 11:01:02.429882  352564 main.go:141] libmachine: (multinode-224116-m02)     </interface>
	I1002 11:01:02.429896  352564 main.go:141] libmachine: (multinode-224116-m02)     <interface type='network'>
	I1002 11:01:02.429911  352564 main.go:141] libmachine: (multinode-224116-m02)       <source network='default'/>
	I1002 11:01:02.429924  352564 main.go:141] libmachine: (multinode-224116-m02)       <model type='virtio'/>
	I1002 11:01:02.429943  352564 main.go:141] libmachine: (multinode-224116-m02)     </interface>
	I1002 11:01:02.429958  352564 main.go:141] libmachine: (multinode-224116-m02)     <serial type='pty'>
	I1002 11:01:02.429971  352564 main.go:141] libmachine: (multinode-224116-m02)       <target port='0'/>
	I1002 11:01:02.429982  352564 main.go:141] libmachine: (multinode-224116-m02)     </serial>
	I1002 11:01:02.429992  352564 main.go:141] libmachine: (multinode-224116-m02)     <console type='pty'>
	I1002 11:01:02.430008  352564 main.go:141] libmachine: (multinode-224116-m02)       <target type='serial' port='0'/>
	I1002 11:01:02.430026  352564 main.go:141] libmachine: (multinode-224116-m02)     </console>
	I1002 11:01:02.430040  352564 main.go:141] libmachine: (multinode-224116-m02)     <rng model='virtio'>
	I1002 11:01:02.430052  352564 main.go:141] libmachine: (multinode-224116-m02)       <backend model='random'>/dev/random</backend>
	I1002 11:01:02.430066  352564 main.go:141] libmachine: (multinode-224116-m02)     </rng>
	I1002 11:01:02.430077  352564 main.go:141] libmachine: (multinode-224116-m02)     
	I1002 11:01:02.430094  352564 main.go:141] libmachine: (multinode-224116-m02)     
	I1002 11:01:02.430105  352564 main.go:141] libmachine: (multinode-224116-m02)   </devices>
	I1002 11:01:02.430113  352564 main.go:141] libmachine: (multinode-224116-m02) </domain>
	I1002 11:01:02.430127  352564 main.go:141] libmachine: (multinode-224116-m02) 
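For readability, this is the libvirt domain definition that is interleaved with log prefixes in the lines above, extracted as plain XML. All element names, attributes, and paths are copied verbatim from the log; nothing has been added.

```xml
<domain type='kvm'>
  <name>multinode-224116-m02</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='host-passthrough'>
  </cpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02/boot2docker.iso'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads' />
      <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02/multinode-224116-m02.rawdisk'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='mk-multinode-224116'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <rng model='virtio'>
      <backend model='random'>/dev/random</backend>
    </rng>
  </devices>
</domain>
```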
	I1002 11:01:02.437029  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5d:96:29 in network default
	I1002 11:01:02.437602  352564 main.go:141] libmachine: (multinode-224116-m02) Ensuring networks are active...
	I1002 11:01:02.437624  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:02.438235  352564 main.go:141] libmachine: (multinode-224116-m02) Ensuring network default is active
	I1002 11:01:02.438593  352564 main.go:141] libmachine: (multinode-224116-m02) Ensuring network mk-multinode-224116 is active
	I1002 11:01:02.438903  352564 main.go:141] libmachine: (multinode-224116-m02) Getting domain xml...
	I1002 11:01:02.439733  352564 main.go:141] libmachine: (multinode-224116-m02) Creating domain...
	I1002 11:01:03.691654  352564 main.go:141] libmachine: (multinode-224116-m02) Waiting to get IP...
	I1002 11:01:03.692559  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:03.692975  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | unable to find current IP address of domain multinode-224116-m02 in network mk-multinode-224116
	I1002 11:01:03.693000  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:03.692942  352927 retry.go:31] will retry after 194.081662ms: waiting for machine to come up
	I1002 11:01:03.888319  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:03.888741  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | unable to find current IP address of domain multinode-224116-m02 in network mk-multinode-224116
	I1002 11:01:03.888771  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:03.888644  352927 retry.go:31] will retry after 263.751324ms: waiting for machine to come up
	I1002 11:01:04.154143  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:04.154590  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | unable to find current IP address of domain multinode-224116-m02 in network mk-multinode-224116
	I1002 11:01:04.154621  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:04.154509  352927 retry.go:31] will retry after 449.170986ms: waiting for machine to come up
	I1002 11:01:04.605209  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:04.605629  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | unable to find current IP address of domain multinode-224116-m02 in network mk-multinode-224116
	I1002 11:01:04.605662  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:04.605582  352927 retry.go:31] will retry after 507.112733ms: waiting for machine to come up
	I1002 11:01:05.114244  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:05.114645  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | unable to find current IP address of domain multinode-224116-m02 in network mk-multinode-224116
	I1002 11:01:05.114670  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:05.114596  352927 retry.go:31] will retry after 744.76865ms: waiting for machine to come up
	I1002 11:01:05.860644  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:05.861144  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | unable to find current IP address of domain multinode-224116-m02 in network mk-multinode-224116
	I1002 11:01:05.861172  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:05.861083  352927 retry.go:31] will retry after 679.666263ms: waiting for machine to come up
	I1002 11:01:06.543040  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:06.543684  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | unable to find current IP address of domain multinode-224116-m02 in network mk-multinode-224116
	I1002 11:01:06.543714  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:06.543620  352927 retry.go:31] will retry after 1.094999688s: waiting for machine to come up
	I1002 11:01:07.640681  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:07.641112  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | unable to find current IP address of domain multinode-224116-m02 in network mk-multinode-224116
	I1002 11:01:07.641143  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:07.641065  352927 retry.go:31] will retry after 1.37662801s: waiting for machine to come up
	I1002 11:01:09.019550  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:09.020010  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | unable to find current IP address of domain multinode-224116-m02 in network mk-multinode-224116
	I1002 11:01:09.020057  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:09.019945  352927 retry.go:31] will retry after 1.561313122s: waiting for machine to come up
	I1002 11:01:10.583545  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:10.583945  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | unable to find current IP address of domain multinode-224116-m02 in network mk-multinode-224116
	I1002 11:01:10.583995  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:10.583871  352927 retry.go:31] will retry after 1.955952335s: waiting for machine to come up
	I1002 11:01:12.541419  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:12.541892  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | unable to find current IP address of domain multinode-224116-m02 in network mk-multinode-224116
	I1002 11:01:12.541917  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:12.541837  352927 retry.go:31] will retry after 2.849685062s: waiting for machine to come up
	I1002 11:01:15.395128  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:15.395587  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | unable to find current IP address of domain multinode-224116-m02 in network mk-multinode-224116
	I1002 11:01:15.395616  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:15.395543  352927 retry.go:31] will retry after 2.748789645s: waiting for machine to come up
	I1002 11:01:18.145844  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:18.146401  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | unable to find current IP address of domain multinode-224116-m02 in network mk-multinode-224116
	I1002 11:01:18.146501  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:18.146421  352927 retry.go:31] will retry after 3.519602558s: waiting for machine to come up
	I1002 11:01:21.667765  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:21.668265  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | unable to find current IP address of domain multinode-224116-m02 in network mk-multinode-224116
	I1002 11:01:21.668295  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | I1002 11:01:21.668215  352927 retry.go:31] will retry after 3.447268301s: waiting for machine to come up
	I1002 11:01:25.119147  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.119576  352564 main.go:141] libmachine: (multinode-224116-m02) Found IP for machine: 192.168.39.135
	I1002 11:01:25.119613  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has current primary IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.119623  352564 main.go:141] libmachine: (multinode-224116-m02) Reserving static IP address...
	I1002 11:01:25.120064  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | unable to find host DHCP lease matching {name: "multinode-224116-m02", mac: "52:54:00:5a:06:6c", ip: "192.168.39.135"} in network mk-multinode-224116
	I1002 11:01:25.192755  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | Getting to WaitForSSH function...
	I1002 11:01:25.192798  352564 main.go:141] libmachine: (multinode-224116-m02) Reserved static IP address: 192.168.39.135
	I1002 11:01:25.192814  352564 main.go:141] libmachine: (multinode-224116-m02) Waiting for SSH to be available...
	I1002 11:01:25.195255  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.195630  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:06:6c}
	I1002 11:01:25.195666  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.195819  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | Using SSH client type: external
	I1002 11:01:25.195852  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02/id_rsa (-rw-------)
	I1002 11:01:25.195895  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:01:25.195911  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | About to run SSH command:
	I1002 11:01:25.195928  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | exit 0
	I1002 11:01:25.290580  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | SSH cmd err, output: <nil>: 
	I1002 11:01:25.290830  352564 main.go:141] libmachine: (multinode-224116-m02) KVM machine creation complete!
	I1002 11:01:25.291234  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetConfigRaw
	I1002 11:01:25.291821  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .DriverName
	I1002 11:01:25.292071  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .DriverName
	I1002 11:01:25.292299  352564 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 11:01:25.292323  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetState
	I1002 11:01:25.293734  352564 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 11:01:25.293758  352564 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 11:01:25.293766  352564 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 11:01:25.293773  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:01:25.296055  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.296440  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:01:25.296473  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.296601  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:01:25.296844  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:01:25.297029  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:01:25.297174  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:01:25.297363  352564 main.go:141] libmachine: Using SSH client type: native
	I1002 11:01:25.297790  352564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1002 11:01:25.297806  352564 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 11:01:25.425790  352564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:01:25.425813  352564 main.go:141] libmachine: Detecting the provisioner...
	I1002 11:01:25.425823  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:01:25.428603  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.429006  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:01:25.429045  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.429216  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:01:25.429533  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:01:25.429728  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:01:25.429875  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:01:25.430091  352564 main.go:141] libmachine: Using SSH client type: native
	I1002 11:01:25.430436  352564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1002 11:01:25.430449  352564 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 11:01:25.559091  352564 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1002 11:01:25.559191  352564 main.go:141] libmachine: found compatible host: buildroot
	I1002 11:01:25.559210  352564 main.go:141] libmachine: Provisioning with buildroot...
	I1002 11:01:25.559223  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetMachineName
	I1002 11:01:25.559557  352564 buildroot.go:166] provisioning hostname "multinode-224116-m02"
	I1002 11:01:25.559594  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetMachineName
	I1002 11:01:25.559783  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:01:25.562471  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.562866  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:01:25.562891  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.563030  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:01:25.563208  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:01:25.563354  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:01:25.563462  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:01:25.563611  352564 main.go:141] libmachine: Using SSH client type: native
	I1002 11:01:25.563989  352564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1002 11:01:25.564007  352564 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-224116-m02 && echo "multinode-224116-m02" | sudo tee /etc/hostname
	I1002 11:01:25.705276  352564 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-224116-m02
	
	I1002 11:01:25.705315  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:01:25.708065  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.708454  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:01:25.708496  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.708657  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:01:25.708864  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:01:25.709064  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:01:25.709199  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:01:25.709335  352564 main.go:141] libmachine: Using SSH client type: native
	I1002 11:01:25.709655  352564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1002 11:01:25.709674  352564 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-224116-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-224116-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-224116-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:01:25.846452  352564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:01:25.846517  352564 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:01:25.846545  352564 buildroot.go:174] setting up certificates
	I1002 11:01:25.846574  352564 provision.go:83] configureAuth start
	I1002 11:01:25.846593  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetMachineName
	I1002 11:01:25.846925  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetIP
	I1002 11:01:25.849818  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.850215  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:01:25.850246  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.850413  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:01:25.852359  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.852689  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:01:25.852724  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.852837  352564 provision.go:138] copyHostCerts
	I1002 11:01:25.852876  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:01:25.852938  352564 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:01:25.852950  352564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:01:25.853035  352564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:01:25.853134  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:01:25.853163  352564 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:01:25.853170  352564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:01:25.853211  352564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:01:25.853277  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:01:25.853301  352564 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:01:25.853311  352564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:01:25.853345  352564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:01:25.853408  352564 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.multinode-224116-m02 san=[192.168.39.135 192.168.39.135 localhost 127.0.0.1 minikube multinode-224116-m02]
	I1002 11:01:25.986655  352564 provision.go:172] copyRemoteCerts
	I1002 11:01:25.986720  352564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:01:25.986752  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:01:25.989276  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.989655  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:01:25.989689  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:25.989891  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:01:25.990111  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:01:25.990282  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:01:25.990472  352564 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02/id_rsa Username:docker}
	I1002 11:01:26.084805  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 11:01:26.084896  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:01:26.108153  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 11:01:26.108243  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1002 11:01:26.131434  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 11:01:26.131553  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 11:01:26.155465  352564 provision.go:86] duration metric: configureAuth took 308.871005ms
	I1002 11:01:26.155495  352564 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:01:26.155676  352564 config.go:182] Loaded profile config "multinode-224116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:01:26.155762  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:01:26.158509  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:26.158903  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:01:26.158935  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:26.159079  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:01:26.159305  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:01:26.159511  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:01:26.159646  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:01:26.159802  352564 main.go:141] libmachine: Using SSH client type: native
	I1002 11:01:26.160154  352564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1002 11:01:26.160172  352564 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:01:26.510389  352564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:01:26.510422  352564 main.go:141] libmachine: Checking connection to Docker...
	I1002 11:01:26.510437  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetURL
	I1002 11:01:26.511821  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | Using libvirt version 6000000
	I1002 11:01:26.514269  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:26.514714  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:01:26.514751  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:26.514889  352564 main.go:141] libmachine: Docker is up and running!
	I1002 11:01:26.514908  352564 main.go:141] libmachine: Reticulating splines...
	I1002 11:01:26.514917  352564 client.go:171] LocalClient.Create took 24.49354878s
	I1002 11:01:26.514940  352564 start.go:167] duration metric: libmachine.API.Create for "multinode-224116" took 24.493613471s
	I1002 11:01:26.514957  352564 start.go:300] post-start starting for "multinode-224116-m02" (driver="kvm2")
	I1002 11:01:26.514971  352564 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:01:26.514999  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .DriverName
	I1002 11:01:26.515296  352564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:01:26.515326  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:01:26.517530  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:26.517927  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:01:26.517964  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:26.518183  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:01:26.518389  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:01:26.518549  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:01:26.518711  352564 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02/id_rsa Username:docker}
	I1002 11:01:26.612321  352564 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:01:26.616434  352564 command_runner.go:130] > NAME=Buildroot
	I1002 11:01:26.616462  352564 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I1002 11:01:26.616468  352564 command_runner.go:130] > ID=buildroot
	I1002 11:01:26.616477  352564 command_runner.go:130] > VERSION_ID=2021.02.12
	I1002 11:01:26.616484  352564 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1002 11:01:26.616665  352564 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:01:26.616688  352564 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:01:26.616773  352564 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:01:26.616863  352564 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:01:26.616877  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> /etc/ssl/certs/3398652.pem
	I1002 11:01:26.616980  352564 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:01:26.626155  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:01:26.649553  352564 start.go:303] post-start completed in 134.578286ms
	I1002 11:01:26.649619  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetConfigRaw
	I1002 11:01:26.650231  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetIP
	I1002 11:01:26.652965  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:26.653335  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:01:26.653370  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:26.653670  352564 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/config.json ...
	I1002 11:01:26.653852  352564 start.go:128] duration metric: createHost completed in 24.651029971s
	I1002 11:01:26.653876  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:01:26.656279  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:26.656660  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:01:26.656690  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:26.656860  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:01:26.657080  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:01:26.657274  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:01:26.657435  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:01:26.657649  352564 main.go:141] libmachine: Using SSH client type: native
	I1002 11:01:26.657950  352564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1002 11:01:26.657962  352564 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:01:26.787366  352564 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696244486.769323144
	
	I1002 11:01:26.787395  352564 fix.go:206] guest clock: 1696244486.769323144
	I1002 11:01:26.787405  352564 fix.go:219] Guest: 2023-10-02 11:01:26.769323144 +0000 UTC Remote: 2023-10-02 11:01:26.65386249 +0000 UTC m=+92.067145262 (delta=115.460654ms)
	I1002 11:01:26.787426  352564 fix.go:190] guest clock delta is within tolerance: 115.460654ms
	I1002 11:01:26.787431  352564 start.go:83] releasing machines lock for "multinode-224116-m02", held for 24.78470043s
	I1002 11:01:26.787455  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .DriverName
	I1002 11:01:26.787764  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetIP
	I1002 11:01:26.790467  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:26.790781  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:01:26.790815  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:26.793588  352564 out.go:177] * Found network options:
	I1002 11:01:26.795192  352564 out.go:177]   - NO_PROXY=192.168.39.165
	W1002 11:01:26.796583  352564 proxy.go:119] fail to check proxy env: Error ip not in block
	I1002 11:01:26.796620  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .DriverName
	I1002 11:01:26.797249  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .DriverName
	I1002 11:01:26.797480  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .DriverName
	I1002 11:01:26.797569  352564 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:01:26.797611  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	W1002 11:01:26.797753  352564 proxy.go:119] fail to check proxy env: Error ip not in block
	I1002 11:01:26.797847  352564 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:01:26.797873  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:01:26.800437  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:26.800491  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:26.800835  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:01:26.800868  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:26.800908  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:01:26.800928  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:26.801029  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:01:26.801132  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:01:26.801215  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:01:26.801326  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:01:26.801333  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:01:26.801515  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:01:26.801515  352564 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02/id_rsa Username:docker}
	I1002 11:01:26.801614  352564 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02/id_rsa Username:docker}
	I1002 11:01:27.051693  352564 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 11:01:27.051815  352564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 11:01:27.058066  352564 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 11:01:27.058106  352564 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:01:27.058181  352564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:01:27.074480  352564 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1002 11:01:27.074527  352564 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:01:27.074535  352564 start.go:469] detecting cgroup driver to use...
	I1002 11:01:27.074597  352564 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:01:27.087757  352564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:01:27.100026  352564 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:01:27.100099  352564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:01:27.112481  352564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:01:27.124936  352564 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:01:27.138374  352564 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1002 11:01:27.234493  352564 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:01:27.351658  352564 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1002 11:01:27.351762  352564 docker.go:213] disabling docker service ...
	I1002 11:01:27.351842  352564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:01:27.365343  352564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:01:27.376171  352564 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1002 11:01:27.377050  352564 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:01:27.496678  352564 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1002 11:01:27.496788  352564 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:01:27.612854  352564 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1002 11:01:27.612881  352564 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1002 11:01:27.612945  352564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:01:27.624677  352564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:01:27.642247  352564 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 11:01:27.642289  352564 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:01:27.642341  352564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:01:27.651733  352564 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:01:27.651845  352564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:01:27.661240  352564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:01:27.670519  352564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:01:27.680503  352564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:01:27.690068  352564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:01:27.697946  352564 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:01:27.698067  352564 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:01:27.698137  352564 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:01:27.711068  352564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:01:27.720512  352564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:01:27.845550  352564 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:01:28.013567  352564 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:01:28.013682  352564 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:01:28.022130  352564 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 11:01:28.022158  352564 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 11:01:28.022168  352564 command_runner.go:130] > Device: 16h/22d	Inode: 708         Links: 1
	I1002 11:01:28.022180  352564 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 11:01:28.022188  352564 command_runner.go:130] > Access: 2023-10-02 11:01:27.984536819 +0000
	I1002 11:01:28.022198  352564 command_runner.go:130] > Modify: 2023-10-02 11:01:27.984536819 +0000
	I1002 11:01:28.022204  352564 command_runner.go:130] > Change: 2023-10-02 11:01:27.984536819 +0000
	I1002 11:01:28.022208  352564 command_runner.go:130] >  Birth: -
	I1002 11:01:28.022281  352564 start.go:537] Will wait 60s for crictl version
	I1002 11:01:28.022402  352564 ssh_runner.go:195] Run: which crictl
	I1002 11:01:28.026123  352564 command_runner.go:130] > /usr/bin/crictl
	I1002 11:01:28.026190  352564 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:01:28.064768  352564 command_runner.go:130] > Version:  0.1.0
	I1002 11:01:28.064796  352564 command_runner.go:130] > RuntimeName:  cri-o
	I1002 11:01:28.064803  352564 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1002 11:01:28.064812  352564 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 11:01:28.064834  352564 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:01:28.064905  352564 ssh_runner.go:195] Run: crio --version
	I1002 11:01:28.107245  352564 command_runner.go:130] > crio version 1.24.1
	I1002 11:01:28.107267  352564 command_runner.go:130] > Version:          1.24.1
	I1002 11:01:28.107279  352564 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1002 11:01:28.107286  352564 command_runner.go:130] > GitTreeState:     dirty
	I1002 11:01:28.107295  352564 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1002 11:01:28.107303  352564 command_runner.go:130] > GoVersion:        go1.19.9
	I1002 11:01:28.107309  352564 command_runner.go:130] > Compiler:         gc
	I1002 11:01:28.107316  352564 command_runner.go:130] > Platform:         linux/amd64
	I1002 11:01:28.107325  352564 command_runner.go:130] > Linkmode:         dynamic
	I1002 11:01:28.107347  352564 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 11:01:28.107353  352564 command_runner.go:130] > SeccompEnabled:   true
	I1002 11:01:28.107360  352564 command_runner.go:130] > AppArmorEnabled:  false
	I1002 11:01:28.107487  352564 ssh_runner.go:195] Run: crio --version
	I1002 11:01:28.155765  352564 command_runner.go:130] > crio version 1.24.1
	I1002 11:01:28.155790  352564 command_runner.go:130] > Version:          1.24.1
	I1002 11:01:28.155801  352564 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1002 11:01:28.155808  352564 command_runner.go:130] > GitTreeState:     dirty
	I1002 11:01:28.155816  352564 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1002 11:01:28.155824  352564 command_runner.go:130] > GoVersion:        go1.19.9
	I1002 11:01:28.155829  352564 command_runner.go:130] > Compiler:         gc
	I1002 11:01:28.155835  352564 command_runner.go:130] > Platform:         linux/amd64
	I1002 11:01:28.155842  352564 command_runner.go:130] > Linkmode:         dynamic
	I1002 11:01:28.155853  352564 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 11:01:28.155859  352564 command_runner.go:130] > SeccompEnabled:   true
	I1002 11:01:28.155866  352564 command_runner.go:130] > AppArmorEnabled:  false
	I1002 11:01:28.159780  352564 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:01:28.161503  352564 out.go:177]   - env NO_PROXY=192.168.39.165
	I1002 11:01:28.162842  352564 main.go:141] libmachine: (multinode-224116-m02) Calling .GetIP
	I1002 11:01:28.165344  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:28.165721  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:01:28.165759  352564 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:01:28.165934  352564 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 11:01:28.170191  352564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:01:28.183668  352564 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116 for IP: 192.168.39.135
	I1002 11:01:28.183705  352564 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:01:28.183857  352564 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:01:28.183895  352564 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:01:28.183908  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 11:01:28.183921  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 11:01:28.183934  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 11:01:28.183947  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 11:01:28.183997  352564 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:01:28.184033  352564 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:01:28.184049  352564 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:01:28.184087  352564 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:01:28.184129  352564 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:01:28.184164  352564 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:01:28.184207  352564 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:01:28.184235  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:01:28.184250  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem -> /usr/share/ca-certificates/339865.pem
	I1002 11:01:28.184263  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> /usr/share/ca-certificates/3398652.pem
	I1002 11:01:28.184578  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:01:28.210304  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:01:28.232662  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:01:28.255555  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:01:28.279316  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:01:28.302491  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:01:28.326340  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:01:28.351509  352564 ssh_runner.go:195] Run: openssl version
	I1002 11:01:28.357042  352564 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1002 11:01:28.357224  352564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:01:28.366837  352564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:01:28.371194  352564 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:01:28.371235  352564 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:01:28.371283  352564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:01:28.376701  352564 command_runner.go:130] > 51391683
	I1002 11:01:28.377057  352564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:01:28.386462  352564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:01:28.396814  352564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:01:28.401437  352564 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:01:28.401512  352564 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:01:28.401567  352564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:01:28.407755  352564 command_runner.go:130] > 3ec20f2e
	I1002 11:01:28.408045  352564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:01:28.417504  352564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:01:28.427214  352564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:01:28.431586  352564 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:01:28.431613  352564 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:01:28.431699  352564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:01:28.437266  352564 command_runner.go:130] > b5213941
	I1002 11:01:28.437352  352564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:01:28.447416  352564 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:01:28.451304  352564 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 11:01:28.451482  352564 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 11:01:28.451597  352564 ssh_runner.go:195] Run: crio config
	I1002 11:01:28.506950  352564 command_runner.go:130] ! time="2023-10-02 11:01:28.492146595Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1002 11:01:28.507008  352564 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 11:01:28.514899  352564 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 11:01:28.514929  352564 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 11:01:28.514940  352564 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 11:01:28.514945  352564 command_runner.go:130] > #
	I1002 11:01:28.514956  352564 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 11:01:28.514965  352564 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 11:01:28.514978  352564 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 11:01:28.514995  352564 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 11:01:28.515004  352564 command_runner.go:130] > # reload'.
	I1002 11:01:28.515015  352564 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 11:01:28.515028  352564 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 11:01:28.515041  352564 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 11:01:28.515054  352564 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 11:01:28.515063  352564 command_runner.go:130] > [crio]
	I1002 11:01:28.515077  352564 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 11:01:28.515089  352564 command_runner.go:130] > # containers images, in this directory.
	I1002 11:01:28.515100  352564 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1002 11:01:28.515114  352564 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 11:01:28.515122  352564 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1002 11:01:28.515129  352564 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 11:01:28.515142  352564 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 11:01:28.515153  352564 command_runner.go:130] > storage_driver = "overlay"
	I1002 11:01:28.515164  352564 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 11:01:28.515178  352564 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 11:01:28.515188  352564 command_runner.go:130] > storage_option = [
	I1002 11:01:28.515200  352564 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1002 11:01:28.515209  352564 command_runner.go:130] > ]
	I1002 11:01:28.515222  352564 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 11:01:28.515231  352564 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 11:01:28.515239  352564 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 11:01:28.515253  352564 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 11:01:28.515266  352564 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 11:01:28.515278  352564 command_runner.go:130] > # always happen on a node reboot
	I1002 11:01:28.515290  352564 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 11:01:28.515302  352564 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 11:01:28.515315  352564 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 11:01:28.515333  352564 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 11:01:28.515341  352564 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1002 11:01:28.515357  352564 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 11:01:28.515374  352564 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 11:01:28.515385  352564 command_runner.go:130] > # internal_wipe = true
	I1002 11:01:28.515398  352564 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 11:01:28.515411  352564 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 11:01:28.515424  352564 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 11:01:28.515435  352564 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 11:01:28.515445  352564 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 11:01:28.515454  352564 command_runner.go:130] > [crio.api]
	I1002 11:01:28.515467  352564 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 11:01:28.515479  352564 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 11:01:28.515496  352564 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 11:01:28.515507  352564 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 11:01:28.515534  352564 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 11:01:28.515545  352564 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 11:01:28.515556  352564 command_runner.go:130] > # stream_port = "0"
	I1002 11:01:28.515566  352564 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 11:01:28.515577  352564 command_runner.go:130] > # stream_enable_tls = false
	I1002 11:01:28.515590  352564 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 11:01:28.515601  352564 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 11:01:28.515614  352564 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 11:01:28.515624  352564 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1002 11:01:28.515633  352564 command_runner.go:130] > # minutes.
	I1002 11:01:28.515645  352564 command_runner.go:130] > # stream_tls_cert = ""
	I1002 11:01:28.515658  352564 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 11:01:28.515672  352564 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1002 11:01:28.515682  352564 command_runner.go:130] > # stream_tls_key = ""
	I1002 11:01:28.515695  352564 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 11:01:28.515706  352564 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 11:01:28.515716  352564 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1002 11:01:28.515726  352564 command_runner.go:130] > # stream_tls_ca = ""
	I1002 11:01:28.515742  352564 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 11:01:28.515753  352564 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1002 11:01:28.515769  352564 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 11:01:28.515780  352564 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1002 11:01:28.515804  352564 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 11:01:28.515814  352564 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 11:01:28.515824  352564 command_runner.go:130] > [crio.runtime]
	I1002 11:01:28.515837  352564 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 11:01:28.515850  352564 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 11:01:28.515861  352564 command_runner.go:130] > # "nofile=1024:2048"
	I1002 11:01:28.515874  352564 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 11:01:28.515884  352564 command_runner.go:130] > # default_ulimits = [
	I1002 11:01:28.515893  352564 command_runner.go:130] > # ]
	I1002 11:01:28.515904  352564 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 11:01:28.515911  352564 command_runner.go:130] > # no_pivot = false
	I1002 11:01:28.515923  352564 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 11:01:28.515939  352564 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 11:01:28.515952  352564 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 11:01:28.515966  352564 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 11:01:28.515977  352564 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 11:01:28.515991  352564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 11:01:28.516002  352564 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1002 11:01:28.516011  352564 command_runner.go:130] > # Cgroup setting for conmon
	I1002 11:01:28.516025  352564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 11:01:28.516036  352564 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 11:01:28.516051  352564 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 11:01:28.516063  352564 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 11:01:28.516077  352564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 11:01:28.516087  352564 command_runner.go:130] > conmon_env = [
	I1002 11:01:28.516100  352564 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1002 11:01:28.516107  352564 command_runner.go:130] > ]
	I1002 11:01:28.516113  352564 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 11:01:28.516124  352564 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 11:01:28.516138  352564 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 11:01:28.516148  352564 command_runner.go:130] > # default_env = [
	I1002 11:01:28.516158  352564 command_runner.go:130] > # ]
	I1002 11:01:28.516171  352564 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 11:01:28.516180  352564 command_runner.go:130] > # selinux = false
	I1002 11:01:28.516193  352564 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 11:01:28.516205  352564 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1002 11:01:28.516215  352564 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1002 11:01:28.516225  352564 command_runner.go:130] > # seccomp_profile = ""
	I1002 11:01:28.516238  352564 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1002 11:01:28.516251  352564 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1002 11:01:28.516265  352564 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1002 11:01:28.516276  352564 command_runner.go:130] > # which might increase security.
	I1002 11:01:28.516286  352564 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1002 11:01:28.516300  352564 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 11:01:28.516311  352564 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 11:01:28.516324  352564 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 11:01:28.516338  352564 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 11:01:28.516351  352564 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:01:28.516363  352564 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 11:01:28.516375  352564 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 11:01:28.516386  352564 command_runner.go:130] > # the cgroup blockio controller.
	I1002 11:01:28.516396  352564 command_runner.go:130] > # blockio_config_file = ""
	I1002 11:01:28.516407  352564 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 11:01:28.516416  352564 command_runner.go:130] > # irqbalance daemon.
	I1002 11:01:28.516428  352564 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 11:01:28.516442  352564 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 11:01:28.516455  352564 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:01:28.516465  352564 command_runner.go:130] > # rdt_config_file = ""
	I1002 11:01:28.516477  352564 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 11:01:28.516487  352564 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1002 11:01:28.516500  352564 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 11:01:28.516509  352564 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 11:01:28.516516  352564 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 11:01:28.516532  352564 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 11:01:28.516544  352564 command_runner.go:130] > # will be added.
	I1002 11:01:28.516551  352564 command_runner.go:130] > # default_capabilities = [
	I1002 11:01:28.516562  352564 command_runner.go:130] > # 	"CHOWN",
	I1002 11:01:28.516572  352564 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 11:01:28.516582  352564 command_runner.go:130] > # 	"FSETID",
	I1002 11:01:28.516592  352564 command_runner.go:130] > # 	"FOWNER",
	I1002 11:01:28.516598  352564 command_runner.go:130] > # 	"SETGID",
	I1002 11:01:28.516608  352564 command_runner.go:130] > # 	"SETUID",
	I1002 11:01:28.516614  352564 command_runner.go:130] > # 	"SETPCAP",
	I1002 11:01:28.516621  352564 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 11:01:28.516626  352564 command_runner.go:130] > # 	"KILL",
	I1002 11:01:28.516635  352564 command_runner.go:130] > # ]
	I1002 11:01:28.516649  352564 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 11:01:28.516663  352564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 11:01:28.516673  352564 command_runner.go:130] > # default_sysctls = [
	I1002 11:01:28.516682  352564 command_runner.go:130] > # ]
	I1002 11:01:28.516693  352564 command_runner.go:130] > # List of devices on the host that a
	I1002 11:01:28.516706  352564 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 11:01:28.516714  352564 command_runner.go:130] > # allowed_devices = [
	I1002 11:01:28.516721  352564 command_runner.go:130] > # 	"/dev/fuse",
	I1002 11:01:28.516728  352564 command_runner.go:130] > # ]
	I1002 11:01:28.516740  352564 command_runner.go:130] > # List of additional devices. specified as
	I1002 11:01:28.516756  352564 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 11:01:28.516769  352564 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 11:01:28.516802  352564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 11:01:28.516812  352564 command_runner.go:130] > # additional_devices = [
	I1002 11:01:28.516820  352564 command_runner.go:130] > # ]
	I1002 11:01:28.516827  352564 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 11:01:28.516836  352564 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 11:01:28.516847  352564 command_runner.go:130] > # 	"/etc/cdi",
	I1002 11:01:28.516858  352564 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 11:01:28.516866  352564 command_runner.go:130] > # ]
	I1002 11:01:28.516879  352564 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 11:01:28.516892  352564 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 11:01:28.516902  352564 command_runner.go:130] > # Defaults to false.
	I1002 11:01:28.516911  352564 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 11:01:28.516923  352564 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 11:01:28.516936  352564 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 11:01:28.516947  352564 command_runner.go:130] > # hooks_dir = [
	I1002 11:01:28.516958  352564 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 11:01:28.516967  352564 command_runner.go:130] > # ]
	I1002 11:01:28.516980  352564 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 11:01:28.516993  352564 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 11:01:28.517001  352564 command_runner.go:130] > # its default mounts from the following two files:
	I1002 11:01:28.517009  352564 command_runner.go:130] > #
	I1002 11:01:28.517027  352564 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 11:01:28.517041  352564 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 11:01:28.517053  352564 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 11:01:28.517062  352564 command_runner.go:130] > #
	I1002 11:01:28.517075  352564 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 11:01:28.517089  352564 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 11:01:28.517102  352564 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 11:01:28.517111  352564 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 11:01:28.517117  352564 command_runner.go:130] > #
	I1002 11:01:28.517128  352564 command_runner.go:130] > # default_mounts_file = ""
	I1002 11:01:28.517140  352564 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 11:01:28.517156  352564 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 11:01:28.517167  352564 command_runner.go:130] > pids_limit = 1024
	I1002 11:01:28.517180  352564 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1002 11:01:28.517194  352564 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 11:01:28.517206  352564 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 11:01:28.517218  352564 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 11:01:28.517228  352564 command_runner.go:130] > # log_size_max = -1
	I1002 11:01:28.517243  352564 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1002 11:01:28.517254  352564 command_runner.go:130] > # log_to_journald = false
	I1002 11:01:28.517267  352564 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 11:01:28.517279  352564 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 11:01:28.517290  352564 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 11:01:28.517302  352564 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 11:01:28.517310  352564 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 11:01:28.517318  352564 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 11:01:28.517330  352564 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 11:01:28.517341  352564 command_runner.go:130] > # read_only = false
	I1002 11:01:28.517351  352564 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 11:01:28.517366  352564 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 11:01:28.517377  352564 command_runner.go:130] > # live configuration reload.
	I1002 11:01:28.517387  352564 command_runner.go:130] > # log_level = "info"
	I1002 11:01:28.517399  352564 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 11:01:28.517409  352564 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:01:28.517416  352564 command_runner.go:130] > # log_filter = ""
	I1002 11:01:28.517425  352564 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 11:01:28.517439  352564 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 11:01:28.517450  352564 command_runner.go:130] > # separated by comma.
	I1002 11:01:28.517460  352564 command_runner.go:130] > # uid_mappings = ""
	I1002 11:01:28.517473  352564 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 11:01:28.517486  352564 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 11:01:28.517495  352564 command_runner.go:130] > # separated by comma.
	I1002 11:01:28.517502  352564 command_runner.go:130] > # gid_mappings = ""
	I1002 11:01:28.517510  352564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 11:01:28.517527  352564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 11:01:28.517541  352564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 11:01:28.517553  352564 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 11:01:28.517568  352564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 11:01:28.517582  352564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 11:01:28.517595  352564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 11:01:28.517604  352564 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 11:01:28.517610  352564 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 11:01:28.517623  352564 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 11:01:28.517637  352564 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 11:01:28.517645  352564 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 11:01:28.517658  352564 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 11:01:28.517671  352564 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 11:01:28.517683  352564 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 11:01:28.517693  352564 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 11:01:28.517704  352564 command_runner.go:130] > drop_infra_ctr = false
	I1002 11:01:28.517711  352564 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 11:01:28.517724  352564 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 11:01:28.517740  352564 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 11:01:28.517749  352564 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 11:01:28.517762  352564 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 11:01:28.517774  352564 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 11:01:28.517784  352564 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 11:01:28.517798  352564 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 11:01:28.517807  352564 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1002 11:01:28.517816  352564 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 11:01:28.517830  352564 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1002 11:01:28.517845  352564 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1002 11:01:28.517856  352564 command_runner.go:130] > # default_runtime = "runc"
	I1002 11:01:28.517868  352564 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 11:01:28.517883  352564 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1002 11:01:28.517900  352564 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 11:01:28.517909  352564 command_runner.go:130] > # creation as a file is not desired either.
	I1002 11:01:28.517924  352564 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 11:01:28.517936  352564 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 11:01:28.517948  352564 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 11:01:28.517957  352564 command_runner.go:130] > # ]
	I1002 11:01:28.517971  352564 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 11:01:28.517984  352564 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 11:01:28.517999  352564 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1002 11:01:28.518008  352564 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1002 11:01:28.518017  352564 command_runner.go:130] > #
	I1002 11:01:28.518029  352564 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1002 11:01:28.518041  352564 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1002 11:01:28.518052  352564 command_runner.go:130] > #  runtime_type = "oci"
	I1002 11:01:28.518063  352564 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1002 11:01:28.518074  352564 command_runner.go:130] > #  privileged_without_host_devices = false
	I1002 11:01:28.518085  352564 command_runner.go:130] > #  allowed_annotations = []
	I1002 11:01:28.518093  352564 command_runner.go:130] > # Where:
	I1002 11:01:28.518105  352564 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1002 11:01:28.518114  352564 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1002 11:01:28.518128  352564 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 11:01:28.518142  352564 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 11:01:28.518152  352564 command_runner.go:130] > #   in $PATH.
	I1002 11:01:28.518165  352564 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1002 11:01:28.518177  352564 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 11:01:28.518190  352564 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1002 11:01:28.518200  352564 command_runner.go:130] > #   state.
	I1002 11:01:28.518210  352564 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 11:01:28.518222  352564 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 11:01:28.518236  352564 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 11:01:28.518249  352564 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 11:01:28.518262  352564 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 11:01:28.518276  352564 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 11:01:28.518287  352564 command_runner.go:130] > #   The currently recognized values are:
	I1002 11:01:28.518299  352564 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 11:01:28.518312  352564 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 11:01:28.518326  352564 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 11:01:28.518340  352564 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 11:01:28.518369  352564 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 11:01:28.518385  352564 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 11:01:28.518399  352564 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 11:01:28.518413  352564 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1002 11:01:28.518425  352564 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 11:01:28.518436  352564 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 11:01:28.518448  352564 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1002 11:01:28.518455  352564 command_runner.go:130] > runtime_type = "oci"
	I1002 11:01:28.518462  352564 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 11:01:28.518472  352564 command_runner.go:130] > runtime_config_path = ""
	I1002 11:01:28.518482  352564 command_runner.go:130] > monitor_path = ""
	I1002 11:01:28.518490  352564 command_runner.go:130] > monitor_cgroup = ""
	I1002 11:01:28.518501  352564 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 11:01:28.518516  352564 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1002 11:01:28.518530  352564 command_runner.go:130] > # running containers
	I1002 11:01:28.518541  352564 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1002 11:01:28.518552  352564 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1002 11:01:28.518589  352564 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1002 11:01:28.518602  352564 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1002 11:01:28.518615  352564 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1002 11:01:28.518627  352564 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1002 11:01:28.518638  352564 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1002 11:01:28.518649  352564 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1002 11:01:28.518660  352564 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1002 11:01:28.518669  352564 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1002 11:01:28.518680  352564 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 11:01:28.518693  352564 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 11:01:28.518707  352564 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 11:01:28.518723  352564 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1002 11:01:28.518738  352564 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1002 11:01:28.518751  352564 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 11:01:28.518768  352564 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 11:01:28.518779  352564 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 11:01:28.518792  352564 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 11:01:28.518808  352564 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 11:01:28.518818  352564 command_runner.go:130] > # Example:
	I1002 11:01:28.518830  352564 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 11:01:28.518841  352564 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 11:01:28.518853  352564 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 11:01:28.518864  352564 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 11:01:28.518871  352564 command_runner.go:130] > # cpuset = "0-1"
	I1002 11:01:28.518875  352564 command_runner.go:130] > # cpushares = 0
	I1002 11:01:28.518885  352564 command_runner.go:130] > # Where:
	I1002 11:01:28.518897  352564 command_runner.go:130] > # The workload name is workload-type.
	I1002 11:01:28.518912  352564 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 11:01:28.518926  352564 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 11:01:28.518939  352564 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 11:01:28.518954  352564 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 11:01:28.518965  352564 command_runner.go:130] > # "io.crio.workload-type.cpushares/$container_name" = "value"
	I1002 11:01:28.518971  352564 command_runner.go:130] > # 
	I1002 11:01:28.518981  352564 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 11:01:28.518990  352564 command_runner.go:130] > #
	I1002 11:01:28.519001  352564 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 11:01:28.519014  352564 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1002 11:01:28.519028  352564 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1002 11:01:28.519041  352564 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1002 11:01:28.519053  352564 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1002 11:01:28.519061  352564 command_runner.go:130] > [crio.image]
	I1002 11:01:28.519067  352564 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 11:01:28.519078  352564 command_runner.go:130] > # default_transport = "docker://"
	I1002 11:01:28.519092  352564 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 11:01:28.519106  352564 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 11:01:28.519117  352564 command_runner.go:130] > # global_auth_file = ""
	I1002 11:01:28.519128  352564 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 11:01:28.519140  352564 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:01:28.519151  352564 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1002 11:01:28.519161  352564 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 11:01:28.519173  352564 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 11:01:28.519186  352564 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:01:28.519198  352564 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 11:01:28.519211  352564 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 11:01:28.519225  352564 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1002 11:01:28.519238  352564 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1002 11:01:28.519250  352564 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 11:01:28.519257  352564 command_runner.go:130] > # pause_command = "/pause"
	I1002 11:01:28.519267  352564 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 11:01:28.519281  352564 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 11:01:28.519295  352564 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 11:01:28.519310  352564 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 11:01:28.519322  352564 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 11:01:28.519332  352564 command_runner.go:130] > # signature_policy = ""
	I1002 11:01:28.519344  352564 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 11:01:28.519353  352564 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 11:01:28.519363  352564 command_runner.go:130] > # changing them here.
	I1002 11:01:28.519374  352564 command_runner.go:130] > # insecure_registries = [
	I1002 11:01:28.519384  352564 command_runner.go:130] > # ]
	I1002 11:01:28.519403  352564 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 11:01:28.519414  352564 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 11:01:28.519425  352564 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 11:01:28.519436  352564 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 11:01:28.519444  352564 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 11:01:28.519458  352564 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 11:01:28.519469  352564 command_runner.go:130] > # CNI plugins.
	I1002 11:01:28.519476  352564 command_runner.go:130] > [crio.network]
	I1002 11:01:28.519490  352564 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 11:01:28.519502  352564 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1002 11:01:28.519512  352564 command_runner.go:130] > # cni_default_network = ""
	I1002 11:01:28.519529  352564 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 11:01:28.519538  352564 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 11:01:28.519548  352564 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 11:01:28.519559  352564 command_runner.go:130] > # plugin_dirs = [
	I1002 11:01:28.519569  352564 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 11:01:28.519575  352564 command_runner.go:130] > # ]
	I1002 11:01:28.519589  352564 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1002 11:01:28.519598  352564 command_runner.go:130] > [crio.metrics]
	I1002 11:01:28.519607  352564 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 11:01:28.519618  352564 command_runner.go:130] > enable_metrics = true
	I1002 11:01:28.519626  352564 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 11:01:28.519636  352564 command_runner.go:130] > # Per default all metrics are enabled.
	I1002 11:01:28.519642  352564 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 11:01:28.519655  352564 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 11:01:28.519669  352564 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 11:01:28.519676  352564 command_runner.go:130] > # metrics_collectors = [
	I1002 11:01:28.519686  352564 command_runner.go:130] > # 	"operations",
	I1002 11:01:28.519696  352564 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1002 11:01:28.519707  352564 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1002 11:01:28.519715  352564 command_runner.go:130] > # 	"operations_errors",
	I1002 11:01:28.519725  352564 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1002 11:01:28.519733  352564 command_runner.go:130] > # 	"image_pulls_by_name",
	I1002 11:01:28.519743  352564 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1002 11:01:28.519753  352564 command_runner.go:130] > # 	"image_pulls_failures",
	I1002 11:01:28.519764  352564 command_runner.go:130] > # 	"image_pulls_successes",
	I1002 11:01:28.519772  352564 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 11:01:28.519783  352564 command_runner.go:130] > # 	"image_layer_reuse",
	I1002 11:01:28.519793  352564 command_runner.go:130] > # 	"containers_oom_total",
	I1002 11:01:28.519803  352564 command_runner.go:130] > # 	"containers_oom",
	I1002 11:01:28.519814  352564 command_runner.go:130] > # 	"processes_defunct",
	I1002 11:01:28.519824  352564 command_runner.go:130] > # 	"operations_total",
	I1002 11:01:28.519832  352564 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 11:01:28.519842  352564 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 11:01:28.519852  352564 command_runner.go:130] > # 	"operations_errors_total",
	I1002 11:01:28.519864  352564 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 11:01:28.519876  352564 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 11:01:28.519886  352564 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 11:01:28.519897  352564 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 11:01:28.519907  352564 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 11:01:28.519917  352564 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 11:01:28.519925  352564 command_runner.go:130] > # ]
	I1002 11:01:28.519934  352564 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 11:01:28.519943  352564 command_runner.go:130] > # metrics_port = 9090
	I1002 11:01:28.519956  352564 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 11:01:28.519966  352564 command_runner.go:130] > # metrics_socket = ""
	I1002 11:01:28.519978  352564 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 11:01:28.519991  352564 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 11:01:28.520004  352564 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 11:01:28.520015  352564 command_runner.go:130] > # certificate on any modification event.
	I1002 11:01:28.520022  352564 command_runner.go:130] > # metrics_cert = ""
	I1002 11:01:28.520030  352564 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 11:01:28.520043  352564 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 11:01:28.520054  352564 command_runner.go:130] > # metrics_key = ""
	I1002 11:01:28.520068  352564 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 11:01:28.520077  352564 command_runner.go:130] > [crio.tracing]
	I1002 11:01:28.520090  352564 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 11:01:28.520099  352564 command_runner.go:130] > # enable_tracing = false
	I1002 11:01:28.520108  352564 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1002 11:01:28.520118  352564 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1002 11:01:28.520130  352564 command_runner.go:130] > # Number of samples to collect per million spans.
	I1002 11:01:28.520141  352564 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1002 11:01:28.520154  352564 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 11:01:28.520164  352564 command_runner.go:130] > [crio.stats]
	I1002 11:01:28.520177  352564 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 11:01:28.520189  352564 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 11:01:28.520198  352564 command_runner.go:130] > # stats_collection_period = 0
	I1002 11:01:28.520279  352564 cni.go:84] Creating CNI manager for ""
	I1002 11:01:28.520291  352564 cni.go:136] 2 nodes found, recommending kindnet
	I1002 11:01:28.520300  352564 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:01:28.520327  352564 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.135 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-224116 NodeName:multinode-224116-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:01:28.520474  352564 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-224116-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.135
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
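The generated config above is a stream of four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A quick structural sanity check, sketched here with just the kinds taken from the log, is to count the `kind:` lines before the file is handed to kubeadm:

```shell
# Minimal sketch: recreate the stream's skeleton and count its "kind:" lines.
# The four kinds are taken from the generated config above; everything else
# in the real file is elided.
cfg='apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration'
printf '%s\n' "$cfg" | grep -c '^kind:'    # -> 4
```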
	
	I1002 11:01:28.520546  352564 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-224116-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-224116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
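The doubled `ExecStart=` in the unit above is the standard systemd override idiom: the empty assignment clears any ExecStart inherited from the base kubelet.service before the new command line is set. A local sketch (the file name below is an assumption, not taken from this log):

```shell
# Recreate the override pattern and confirm both lines are present: the first
# clears the inherited ExecStart list, the second sets the real command.
cat > kubelet-override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --config=/var/lib/kubelet/config.yaml
EOF
grep -c '^ExecStart' kubelet-override.conf    # -> 2
```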
	I1002 11:01:28.520612  352564 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:01:28.529606  352564 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.2': No such file or directory
	I1002 11:01:28.529802  352564 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.2': No such file or directory
	
	Initiating transfer...
	I1002 11:01:28.529875  352564 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.2
	I1002 11:01:28.538908  352564 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl.sha256
	I1002 11:01:28.538942  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/linux/amd64/v1.28.2/kubectl -> /var/lib/minikube/binaries/v1.28.2/kubectl
	I1002 11:01:28.538987  352564 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/linux/amd64/v1.28.2/kubelet
	I1002 11:01:28.539016  352564 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/linux/amd64/v1.28.2/kubeadm
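The `?checksum=file:...sha256` query on each download URL tells minikube's downloader to verify the fetched binary against its published SHA-256 digest. The mechanics can be sketched offline with `sha256sum` (the file names here are placeholders; the exact digest-file format served by dl.k8s.io is not shown in this log):

```shell
# Offline sketch of digest verification: record a file's digest in the
# "<digest>  <name>" format that sha256sum -c expects, then verify it.
printf 'fake-kubectl-bytes\n' > kubectl.demo
sha256sum kubectl.demo > kubectl.demo.sha256
sha256sum -c kubectl.demo.sha256    # prints "kubectl.demo: OK"
```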
	I1002 11:01:28.539018  352564 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubectl
	I1002 11:01:28.546680  352564 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubectl': No such file or directory
	I1002 11:01:28.546722  352564 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubectl': No such file or directory
	I1002 11:01:28.546748  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/linux/amd64/v1.28.2/kubectl --> /var/lib/minikube/binaries/v1.28.2/kubectl (49864704 bytes)
	I1002 11:01:29.549579  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/linux/amd64/v1.28.2/kubeadm -> /var/lib/minikube/binaries/v1.28.2/kubeadm
	I1002 11:01:29.549667  352564 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.2/kubeadm
	I1002 11:01:29.554543  352564 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubeadm': No such file or directory
	I1002 11:01:29.554823  352564 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubeadm': No such file or directory
	I1002 11:01:29.554867  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/linux/amd64/v1.28.2/kubeadm --> /var/lib/minikube/binaries/v1.28.2/kubeadm (50757632 bytes)
	I1002 11:01:30.774598  352564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:01:30.787753  352564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/linux/amd64/v1.28.2/kubelet -> /var/lib/minikube/binaries/v1.28.2/kubelet
	I1002 11:01:30.787845  352564 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.2/kubelet
	I1002 11:01:30.792138  352564 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubelet': No such file or directory
	I1002 11:01:30.792186  352564 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubelet': No such file or directory
	I1002 11:01:30.792218  352564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/linux/amd64/v1.28.2/kubelet --> /var/lib/minikube/binaries/v1.28.2/kubelet (110776320 bytes)
	I1002 11:01:31.280371  352564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1002 11:01:31.289590  352564 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1002 11:01:31.305610  352564 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:01:31.321568  352564 ssh_runner.go:195] Run: grep 192.168.39.165	control-plane.minikube.internal$ /etc/hosts
	I1002 11:01:31.325240  352564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.165	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:01:31.337288  352564 host.go:66] Checking if "multinode-224116" exists ...
	I1002 11:01:31.337514  352564 config.go:182] Loaded profile config "multinode-224116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:01:31.337685  352564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:01:31.337725  352564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:01:31.354128  352564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44421
	I1002 11:01:31.354678  352564 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:01:31.355136  352564 main.go:141] libmachine: Using API Version  1
	I1002 11:01:31.355158  352564 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:01:31.355452  352564 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:01:31.355672  352564 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:01:31.355822  352564 start.go:304] JoinCluster: &{Name:multinode-224116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-224116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.135 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:01:31.355937  352564 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1002 11:01:31.355952  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:01:31.358721  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:01:31.359265  352564 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:01:31.359304  352564 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:01:31.359491  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:01:31.359711  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:01:31.359878  352564 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:01:31.360040  352564 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa Username:docker}
	I1002 11:01:31.537941  352564 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token drws1w.jn9f5s3pg7ksq1pt --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 11:01:31.540648  352564 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.135 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1002 11:01:31.540690  352564 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token drws1w.jn9f5s3pg7ksq1pt --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-224116-m02"
	I1002 11:01:31.591154  352564 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 11:01:31.734813  352564 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 11:01:31.734840  352564 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 11:01:31.779309  352564 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:01:31.779340  352564 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:01:31.779348  352564 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1002 11:01:31.900673  352564 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1002 11:01:33.913640  352564 command_runner.go:130] > This node has joined the cluster:
	I1002 11:01:33.913670  352564 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1002 11:01:33.913677  352564 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1002 11:01:33.913683  352564 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1002 11:01:33.915835  352564 command_runner.go:130] ! W1002 11:01:31.580644     824 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1002 11:01:33.915856  352564 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 11:01:33.915882  352564 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token drws1w.jn9f5s3pg7ksq1pt --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-224116-m02": (2.375176309s)
	I1002 11:01:33.915907  352564 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1002 11:01:34.167142  352564 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1002 11:01:34.167257  352564 start.go:306] JoinCluster complete in 2.811430133s
	I1002 11:01:34.167286  352564 cni.go:84] Creating CNI manager for ""
	I1002 11:01:34.167298  352564 cni.go:136] 2 nodes found, recommending kindnet
	I1002 11:01:34.167362  352564 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 11:01:34.173304  352564 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1002 11:01:34.173331  352564 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1002 11:01:34.173341  352564 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1002 11:01:34.173350  352564 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 11:01:34.173361  352564 command_runner.go:130] > Access: 2023-10-02 11:00:08.166811444 +0000
	I1002 11:01:34.173370  352564 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I1002 11:01:34.173383  352564 command_runner.go:130] > Change: 2023-10-02 11:00:06.319811444 +0000
	I1002 11:01:34.173390  352564 command_runner.go:130] >  Birth: -
	I1002 11:01:34.173519  352564 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 11:01:34.173540  352564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 11:01:34.190700  352564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 11:01:34.556423  352564 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1002 11:01:34.556452  352564 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1002 11:01:34.556459  352564 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1002 11:01:34.556464  352564 command_runner.go:130] > daemonset.apps/kindnet configured
	I1002 11:01:34.556840  352564 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:01:34.557179  352564 kapi.go:59] client config for multinode-224116: &rest.Config{Host:"https://192.168.39.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:01:34.557668  352564 round_trippers.go:463] GET https://192.168.39.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 11:01:34.557691  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:34.557701  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:34.557709  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:34.560294  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:34.560315  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:34.560322  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:34.560328  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:34.560333  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:34.560338  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:34.560343  352564 round_trippers.go:580]     Content-Length: 291
	I1002 11:01:34.560348  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:34 GMT
	I1002 11:01:34.560353  352564 round_trippers.go:580]     Audit-Id: 7b2f0e96-3d2b-43fc-80fb-510c3a113920
	I1002 11:01:34.560421  352564 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08c5bbea-ba20-4e90-9cf5-25582be54095","resourceVersion":"411","creationTimestamp":"2023-10-02T11:00:39Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1002 11:01:34.560532  352564 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-224116" context rescaled to 1 replicas
	I1002 11:01:34.560558  352564 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.135 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1002 11:01:34.563600  352564 out.go:177] * Verifying Kubernetes components...
	I1002 11:01:34.565135  352564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:01:34.579246  352564 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:01:34.579459  352564 kapi.go:59] client config for multinode-224116: &rest.Config{Host:"https://192.168.39.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:01:34.579688  352564 node_ready.go:35] waiting up to 6m0s for node "multinode-224116-m02" to be "Ready" ...
	I1002 11:01:34.579747  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:34.579754  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:34.579762  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:34.579768  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:34.582550  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:34.582570  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:34.582580  352564 round_trippers.go:580]     Audit-Id: 66e321da-47f4-4254-92da-086c4d1f5eec
	I1002 11:01:34.582587  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:34.582595  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:34.582607  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:34.582620  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:34.582632  352564 round_trippers.go:580]     Content-Length: 3531
	I1002 11:01:34.582641  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:34 GMT
	I1002 11:01:34.582913  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"461","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I1002 11:01:34.583277  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:34.583291  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:34.583302  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:34.583311  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:34.585336  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:34.585352  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:34.585361  352564 round_trippers.go:580]     Audit-Id: c8dacc56-13c6-4538-ab1f-6ee8e2709ad3
	I1002 11:01:34.585369  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:34.585376  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:34.585384  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:34.585392  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:34.585403  352564 round_trippers.go:580]     Content-Length: 3531
	I1002 11:01:34.585412  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:34 GMT
	I1002 11:01:34.585510  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"461","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I1002 11:01:35.086963  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:35.086986  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:35.086994  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:35.087001  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:35.090388  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:01:35.090418  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:35.090429  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:35 GMT
	I1002 11:01:35.090437  352564 round_trippers.go:580]     Audit-Id: 47a0b328-f57d-42aa-847a-6afc3538e49d
	I1002 11:01:35.090444  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:35.090451  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:35.090458  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:35.090466  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:35.090475  352564 round_trippers.go:580]     Content-Length: 3531
	I1002 11:01:35.090585  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"461","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I1002 11:01:35.586085  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:35.586110  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:35.586119  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:35.586125  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:35.588939  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:35.588967  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:35.588978  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:35.588988  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:35.588995  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:35.589001  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:35.589008  352564 round_trippers.go:580]     Content-Length: 3531
	I1002 11:01:35.589017  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:35 GMT
	I1002 11:01:35.589027  352564 round_trippers.go:580]     Audit-Id: 45019baa-5178-4c3a-861d-de2d8150cc04
	I1002 11:01:35.589099  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"461","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I1002 11:01:36.086747  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:36.086770  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:36.086778  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:36.086785  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:36.089758  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:36.089783  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:36.089794  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:36 GMT
	I1002 11:01:36.089803  352564 round_trippers.go:580]     Audit-Id: a48356a6-6807-436e-b6cd-1b1e3c32fab0
	I1002 11:01:36.089812  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:36.089830  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:36.089841  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:36.089847  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:36.089853  352564 round_trippers.go:580]     Content-Length: 3531
	I1002 11:01:36.089941  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"461","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I1002 11:01:36.586156  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:36.586185  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:36.586198  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:36.586208  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:36.589529  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:01:36.589552  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:36.589559  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:36.589565  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:36.589570  352564 round_trippers.go:580]     Content-Length: 3531
	I1002 11:01:36.589575  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:36 GMT
	I1002 11:01:36.589582  352564 round_trippers.go:580]     Audit-Id: 20dcfa2c-a527-462e-a51f-7c857703af98
	I1002 11:01:36.589591  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:36.589599  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:36.589878  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"461","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I1002 11:01:36.590171  352564 node_ready.go:58] node "multinode-224116-m02" has status "Ready":"False"
	I1002 11:01:37.086946  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:37.086976  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:37.086991  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:37.086999  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:37.092566  352564 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 11:01:37.092593  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:37.092602  352564 round_trippers.go:580]     Audit-Id: c7c4529a-a97f-44bb-a607-52744fdaf6cf
	I1002 11:01:37.092611  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:37.092618  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:37.092626  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:37.092635  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:37.092647  352564 round_trippers.go:580]     Content-Length: 3640
	I1002 11:01:37.092657  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:37 GMT
	I1002 11:01:37.092783  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"468","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1002 11:01:37.586075  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:37.586101  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:37.586109  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:37.586115  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:37.589094  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:37.589123  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:37.589135  352564 round_trippers.go:580]     Content-Length: 3640
	I1002 11:01:37.589145  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:37 GMT
	I1002 11:01:37.589152  352564 round_trippers.go:580]     Audit-Id: a5c61d09-7769-4c00-a44c-5cd9f26310e1
	I1002 11:01:37.589164  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:37.589176  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:37.589187  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:37.589193  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:37.589284  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"468","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1002 11:01:38.086849  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:38.086888  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:38.086899  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:38.086908  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:38.090074  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:01:38.090103  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:38.090114  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:38.090123  352564 round_trippers.go:580]     Content-Length: 3640
	I1002 11:01:38.090131  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:38 GMT
	I1002 11:01:38.090140  352564 round_trippers.go:580]     Audit-Id: 6e206d88-ce55-4148-af12-48ab69a7d7d9
	I1002 11:01:38.090159  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:38.090174  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:38.090185  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:38.090400  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"468","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1002 11:01:38.586773  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:38.586807  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:38.586826  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:38.586836  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:38.589584  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:38.589612  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:38.589624  352564 round_trippers.go:580]     Content-Length: 3640
	I1002 11:01:38.589634  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:38 GMT
	I1002 11:01:38.589642  352564 round_trippers.go:580]     Audit-Id: 8b19155c-b9cf-4faf-8996-614c1b230617
	I1002 11:01:38.589652  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:38.589664  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:38.589672  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:38.589684  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:38.589778  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"468","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1002 11:01:39.086301  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:39.086329  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:39.086337  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:39.086343  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:39.089397  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:01:39.089429  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:39.089451  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:39.089460  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:39.089468  352564 round_trippers.go:580]     Content-Length: 3640
	I1002 11:01:39.089477  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:39 GMT
	I1002 11:01:39.089487  352564 round_trippers.go:580]     Audit-Id: 3cbcb82a-c1d2-4f0a-96df-cae55b08a6a2
	I1002 11:01:39.089498  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:39.089510  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:39.089592  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"468","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1002 11:01:39.089890  352564 node_ready.go:58] node "multinode-224116-m02" has status "Ready":"False"
	I1002 11:01:39.586101  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:39.586126  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:39.586134  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:39.586140  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:39.588841  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:39.588863  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:39.588870  352564 round_trippers.go:580]     Audit-Id: 9d214fe4-7fe4-461a-9633-3ed6d6bc925a
	I1002 11:01:39.588876  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:39.588895  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:39.588903  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:39.588912  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:39.588919  352564 round_trippers.go:580]     Content-Length: 3640
	I1002 11:01:39.588927  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:39 GMT
	I1002 11:01:39.589058  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"468","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1002 11:01:40.086238  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:40.086263  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:40.086272  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:40.086278  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:40.089235  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:40.089257  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:40.089265  352564 round_trippers.go:580]     Content-Length: 3640
	I1002 11:01:40.089271  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:40 GMT
	I1002 11:01:40.089277  352564 round_trippers.go:580]     Audit-Id: 454bfc32-11fc-45b8-8b34-fb03c13d76e6
	I1002 11:01:40.089282  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:40.089287  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:40.089293  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:40.089298  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:40.089395  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"468","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1002 11:01:40.586991  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:40.587016  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:40.587043  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:40.587049  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:40.589714  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:40.589738  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:40.589750  352564 round_trippers.go:580]     Audit-Id: 9f676cca-94f8-42da-82b1-aa6c0b1ff1fd
	I1002 11:01:40.589759  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:40.589768  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:40.589777  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:40.589784  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:40.589789  352564 round_trippers.go:580]     Content-Length: 3640
	I1002 11:01:40.589796  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:40 GMT
	I1002 11:01:40.589873  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"468","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1002 11:01:41.086410  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:41.086437  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:41.086446  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:41.086452  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:41.089279  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:41.089304  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:41.089312  352564 round_trippers.go:580]     Audit-Id: ed8c2537-220c-4a08-842e-722182054bb8
	I1002 11:01:41.089317  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:41.089322  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:41.089327  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:41.089332  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:41.089337  352564 round_trippers.go:580]     Content-Length: 3640
	I1002 11:01:41.089342  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:41 GMT
	I1002 11:01:41.089493  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"468","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1002 11:01:41.586776  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:41.586799  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:41.586808  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:41.586815  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:41.590265  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:01:41.590291  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:41.590298  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:41.590304  352564 round_trippers.go:580]     Content-Length: 3640
	I1002 11:01:41.590309  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:41 GMT
	I1002 11:01:41.590314  352564 round_trippers.go:580]     Audit-Id: 43edb8e0-d667-4f05-b403-77e31bb2093c
	I1002 11:01:41.590321  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:41.590330  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:41.590338  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:41.590572  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"468","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1002 11:01:41.590894  352564 node_ready.go:58] node "multinode-224116-m02" has status "Ready":"False"
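	The node_ready.go lines above summarize what this poll loop is doing: GET the v1.Node object roughly every 500 ms and inspect its "Ready" condition until it reports "True". As an illustration only (this is not minikube's actual node_ready.go implementation, and the helper name node_is_ready is hypothetical), the condition check can be sketched like this:

```python
import json

def node_is_ready(node: dict) -> bool:
    """Return True when the Node's "Ready" condition has status "True".

    A v1.Node carries a list of conditions in .status.conditions; the
    entry with type "Ready" decides readiness. A missing condition list
    (node just registered) counts as not ready.
    """
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False

# Example: a node that has not yet reported Ready, as in the log above.
node = json.loads("""
{"kind": "Node",
 "metadata": {"name": "multinode-224116-m02"},
 "status": {"conditions": [{"type": "Ready", "status": "False"}]}}
""")
print(node_is_ready(node))  # prints: False
```

	In the real loop, a "False" (or absent) result triggers another GET after the delay, which is why the log repeats the same request/response pair until the kubelet flips the condition.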
	I1002 11:01:42.086692  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:42.086715  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:42.086724  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:42.086730  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:42.090290  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:01:42.090315  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:42.090326  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:42.090336  352564 round_trippers.go:580]     Content-Length: 3640
	I1002 11:01:42.090345  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:42 GMT
	I1002 11:01:42.090364  352564 round_trippers.go:580]     Audit-Id: 5e52bd95-533b-432f-8830-352a7bbcc237
	I1002 11:01:42.090377  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:42.090388  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:42.090398  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:42.090504  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"468","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1002 11:01:42.586033  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:42.586059  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:42.586067  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:42.586074  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:42.589554  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:01:42.589576  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:42.589583  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:42.589589  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:42.589594  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:42.589599  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:42.589604  352564 round_trippers.go:580]     Content-Length: 3640
	I1002 11:01:42.589609  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:42 GMT
	I1002 11:01:42.589616  352564 round_trippers.go:580]     Audit-Id: d3128737-4d87-4f2e-bfb8-8f07c04c7291
	I1002 11:01:42.589698  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"468","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1002 11:01:43.086548  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:43.086573  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:43.086582  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:43.086588  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:43.089707  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:01:43.089740  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:43.089749  352564 round_trippers.go:580]     Audit-Id: eacd80bd-a7ba-447c-9b17-2ccc1981ee5a
	I1002 11:01:43.089755  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:43.089761  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:43.089766  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:43.089774  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:43.089787  352564 round_trippers.go:580]     Content-Length: 3640
	I1002 11:01:43.089795  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:43 GMT
	I1002 11:01:43.089887  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"468","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1002 11:01:43.586424  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:43.586446  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:43.586455  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:43.586461  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:43.588806  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:43.588826  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:43.588832  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:43.588840  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:43.588849  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:43.588858  352564 round_trippers.go:580]     Content-Length: 3640
	I1002 11:01:43.588869  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:43 GMT
	I1002 11:01:43.588878  352564 round_trippers.go:580]     Audit-Id: a0c42126-7575-45fc-af5f-9e6c8540dc22
	I1002 11:01:43.588887  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:43.588969  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"468","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1002 11:01:44.086491  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:44.086514  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:44.086522  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:44.086528  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:44.090615  352564 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 11:01:44.090636  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:44.090643  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:44.090649  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:44.090654  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:44.090659  352564 round_trippers.go:580]     Content-Length: 3909
	I1002 11:01:44.090664  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:44 GMT
	I1002 11:01:44.090669  352564 round_trippers.go:580]     Audit-Id: 8988200e-dd13-4879-8cf5-e3acf1962d54
	I1002 11:01:44.090675  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:44.090920  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"487","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2885 chars]
	I1002 11:01:44.091264  352564 node_ready.go:58] node "multinode-224116-m02" has status "Ready":"False"
	I1002 11:01:44.586542  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:44.586566  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:44.586575  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:44.586581  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:44.589845  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:01:44.589872  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:44.589882  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:44.589890  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:44.589899  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:44.589912  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:44.589919  352564 round_trippers.go:580]     Content-Length: 3909
	I1002 11:01:44.589926  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:44 GMT
	I1002 11:01:44.589934  352564 round_trippers.go:580]     Audit-Id: bb0624cf-f3d5-4f79-87ae-c9fdea080ee9
	I1002 11:01:44.590022  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"487","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2885 chars]
	I1002 11:01:45.086291  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:45.086318  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:45.086326  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:45.086332  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:45.089533  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:01:45.089553  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:45.089560  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:45.089565  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:45.089570  352564 round_trippers.go:580]     Content-Length: 3726
	I1002 11:01:45.089576  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:45 GMT
	I1002 11:01:45.089581  352564 round_trippers.go:580]     Audit-Id: 5685a2d5-ff69-4d55-9ddf-43dea55e9f66
	I1002 11:01:45.089587  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:45.089597  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:45.089677  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"492","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2702 chars]
	I1002 11:01:45.089922  352564 node_ready.go:49] node "multinode-224116-m02" has status "Ready":"True"
	I1002 11:01:45.089937  352564 node_ready.go:38] duration metric: took 10.510235189s waiting for node "multinode-224116-m02" to be "Ready" ...
	I1002 11:01:45.089946  352564 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:01:45.090003  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I1002 11:01:45.090010  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:45.090017  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:45.090023  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:45.093858  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:01:45.093883  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:45.093891  352564 round_trippers.go:580]     Audit-Id: b8db7d67-213c-45a2-958a-3fdbb7625bb2
	I1002 11:01:45.093897  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:45.093912  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:45.093924  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:45.093933  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:45.093943  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:45 GMT
	I1002 11:01:45.094838  352564 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"407","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67372 chars]
	I1002 11:01:45.097743  352564 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:45.097830  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:01:45.097842  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:45.097853  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:45.097866  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:45.100370  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:45.100392  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:45.100402  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:45.100409  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:45.100415  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:45.100421  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:45 GMT
	I1002 11:01:45.100430  352564 round_trippers.go:580]     Audit-Id: ef342f6e-0d23-46b7-8cb5-3c27c8d22af8
	I1002 11:01:45.100435  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:45.100664  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"407","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1002 11:01:45.101069  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:01:45.101081  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:45.101088  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:45.101094  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:45.102935  352564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:01:45.102951  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:45.102964  352564 round_trippers.go:580]     Audit-Id: 0da71a20-9270-4fe3-ae7f-725c495f6b45
	I1002 11:01:45.102974  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:45.102987  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:45.102999  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:45.103010  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:45.103016  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:45 GMT
	I1002 11:01:45.103170  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:01:45.103433  352564 pod_ready.go:92] pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace has status "Ready":"True"
	I1002 11:01:45.103445  352564 pod_ready.go:81] duration metric: took 5.679518ms waiting for pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:45.103453  352564 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:45.103517  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-224116
	I1002 11:01:45.103525  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:45.103532  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:45.103538  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:45.105342  352564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:01:45.105358  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:45.105366  352564 round_trippers.go:580]     Audit-Id: 8aaa5b09-460f-42eb-88b6-3ab44493954e
	I1002 11:01:45.105374  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:45.105381  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:45.105389  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:45.105397  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:45.105406  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:45 GMT
	I1002 11:01:45.105598  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-224116","namespace":"kube-system","uid":"5accde9f-e62c-422f-aaa1-ddf4f8f0da05","resourceVersion":"402","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.165:2379","kubernetes.io/config.hash":"6042e2dc777b7ecb1e5f00a006739c52","kubernetes.io/config.mirror":"6042e2dc777b7ecb1e5f00a006739c52","kubernetes.io/config.seen":"2023-10-02T11:00:31.044390279Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1002 11:01:45.105978  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:01:45.105994  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:45.106001  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:45.106008  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:45.107842  352564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:01:45.107857  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:45.107863  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:45.107868  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:45.107873  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:45.107879  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:45 GMT
	I1002 11:01:45.107885  352564 round_trippers.go:580]     Audit-Id: c159735c-2617-4796-bbb5-991e795a104d
	I1002 11:01:45.107891  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:45.108110  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:01:45.108374  352564 pod_ready.go:92] pod "etcd-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:01:45.108388  352564 pod_ready.go:81] duration metric: took 4.909614ms waiting for pod "etcd-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:45.108401  352564 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:45.108443  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-224116
	I1002 11:01:45.108451  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:45.108458  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:45.108464  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:45.110442  352564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:01:45.110466  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:45.110475  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:45 GMT
	I1002 11:01:45.110483  352564 round_trippers.go:580]     Audit-Id: eca30fdb-308e-458c-8f97-935e7bd07265
	I1002 11:01:45.110491  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:45.110499  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:45.110512  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:45.110521  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:45.110660  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-224116","namespace":"kube-system","uid":"26841310-e8b5-409e-8915-888db5e257ab","resourceVersion":"302","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.165:8443","kubernetes.io/config.hash":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.mirror":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.seen":"2023-10-02T11:00:31.044391274Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1002 11:01:45.111037  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:01:45.111049  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:45.111056  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:45.111062  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:45.112719  352564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:01:45.112737  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:45.112746  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:45.112754  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:45.112761  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:45 GMT
	I1002 11:01:45.112769  352564 round_trippers.go:580]     Audit-Id: cbf48511-10de-4d72-9b29-9304c7fb2c0e
	I1002 11:01:45.112776  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:45.112784  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:45.112906  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:01:45.113148  352564 pod_ready.go:92] pod "kube-apiserver-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:01:45.113159  352564 pod_ready.go:81] duration metric: took 4.753074ms waiting for pod "kube-apiserver-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:45.113166  352564 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:45.113205  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-224116
	I1002 11:01:45.113213  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:45.113219  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:45.113225  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:45.114883  352564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:01:45.114896  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:45.114902  352564 round_trippers.go:580]     Audit-Id: 550326db-9a8f-42e0-ba7f-ae7fe4a0da3b
	I1002 11:01:45.114910  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:45.114917  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:45.114925  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:45.114933  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:45.114942  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:45 GMT
	I1002 11:01:45.115195  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-224116","namespace":"kube-system","uid":"7d71d06a-a323-41ce-a7a4-c7d33880f9fa","resourceVersion":"403","creationTimestamp":"2023-10-02T11:00:40Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3bee2fdb49df38e62a3033b15b9a59ad","kubernetes.io/config.mirror":"3bee2fdb49df38e62a3033b15b9a59ad","kubernetes.io/config.seen":"2023-10-02T11:00:39.980801936Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1002 11:01:45.115517  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:01:45.115528  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:45.115536  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:45.115542  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:45.117306  352564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:01:45.117325  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:45.117334  352564 round_trippers.go:580]     Audit-Id: 2ad8f1bb-7107-4129-bf65-99848cf285a1
	I1002 11:01:45.117342  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:45.117349  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:45.117354  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:45.117360  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:45.117369  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:45 GMT
	I1002 11:01:45.117518  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:01:45.117929  352564 pod_ready.go:92] pod "kube-controller-manager-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:01:45.117950  352564 pod_ready.go:81] duration metric: took 4.77716ms waiting for pod "kube-controller-manager-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:45.117961  352564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nshcj" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:45.286296  352564 request.go:629] Waited for 168.261314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nshcj
	I1002 11:01:45.286408  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nshcj
	I1002 11:01:45.286416  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:45.286426  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:45.286436  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:45.289394  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:45.289418  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:45.289426  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:45.289432  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:45.289438  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:45.289443  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:45.289453  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:45 GMT
	I1002 11:01:45.289458  352564 round_trippers.go:580]     Audit-Id: 23e7af44-5c2b-4bd4-9c34-26aa63381b6d
	I1002 11:01:45.289624  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nshcj","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3def928-5e43-4f7e-8ae2-3c0daafd0003","resourceVersion":"375","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3580e93b-6724-4c5c-baca-8e1b963cda9a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3580e93b-6724-4c5c-baca-8e1b963cda9a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1002 11:01:45.486436  352564 request.go:629] Waited for 196.325422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:01:45.486538  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:01:45.486545  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:45.486554  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:45.486562  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:45.489274  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:45.489304  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:45.489315  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:45.489328  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:45 GMT
	I1002 11:01:45.489334  352564 round_trippers.go:580]     Audit-Id: fc815fc8-b3db-40c7-8307-c21935e78173
	I1002 11:01:45.489343  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:45.489352  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:45.489359  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:45.489486  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:01:45.489800  352564 pod_ready.go:92] pod "kube-proxy-nshcj" in "kube-system" namespace has status "Ready":"True"
	I1002 11:01:45.489814  352564 pod_ready.go:81] duration metric: took 371.846172ms waiting for pod "kube-proxy-nshcj" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:45.489823  352564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rdt77" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:45.686945  352564 request.go:629] Waited for 197.053193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdt77
	I1002 11:01:45.687021  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdt77
	I1002 11:01:45.687026  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:45.687034  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:45.687042  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:45.690108  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:01:45.690144  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:45.690153  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:45.690159  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:45 GMT
	I1002 11:01:45.690164  352564 round_trippers.go:580]     Audit-Id: b5d8b43e-6716-4701-a699-db1795987e62
	I1002 11:01:45.690170  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:45.690175  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:45.690181  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:45.690372  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rdt77","generateName":"kube-proxy-","namespace":"kube-system","uid":"96482fa7-e7e4-4375-b3b6-cc24f41d4bcf","resourceVersion":"477","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3580e93b-6724-4c5c-baca-8e1b963cda9a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3580e93b-6724-4c5c-baca-8e1b963cda9a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1002 11:01:45.887252  352564 request.go:629] Waited for 196.366832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:45.887353  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:01:45.887362  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:45.887370  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:45.887377  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:45.889914  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:45.889934  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:45.889941  352564 round_trippers.go:580]     Audit-Id: 4f4c8961-0ade-4c61-b123-6c03d4ab9ca2
	I1002 11:01:45.889947  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:45.889952  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:45.889957  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:45.889962  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:45.889968  352564 round_trippers.go:580]     Content-Length: 3726
	I1002 11:01:45.889973  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:45 GMT
	I1002 11:01:45.890035  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"492","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2702 chars]
	I1002 11:01:45.890276  352564 pod_ready.go:92] pod "kube-proxy-rdt77" in "kube-system" namespace has status "Ready":"True"
	I1002 11:01:45.890290  352564 pod_ready.go:81] duration metric: took 400.461537ms waiting for pod "kube-proxy-rdt77" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:45.890299  352564 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:46.086799  352564 request.go:629] Waited for 196.419152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-224116
	I1002 11:01:46.086885  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-224116
	I1002 11:01:46.086890  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:46.086898  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:46.086910  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:46.089614  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:46.089635  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:46.089644  352564 round_trippers.go:580]     Audit-Id: d6139f7c-8803-4dfd-b835-3f3b6f94b28f
	I1002 11:01:46.089653  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:46.089661  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:46.089668  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:46.089676  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:46.089684  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:46 GMT
	I1002 11:01:46.089780  352564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-224116","namespace":"kube-system","uid":"66f95d23-f489-423f-9008-a7cf03a9ee55","resourceVersion":"361","creationTimestamp":"2023-10-02T11:00:40Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9bd8dc11ca1ef87923294a95bf3b31e7","kubernetes.io/config.mirror":"9bd8dc11ca1ef87923294a95bf3b31e7","kubernetes.io/config.seen":"2023-10-02T11:00:39.980802889Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1002 11:01:46.286598  352564 request.go:629] Waited for 196.365354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:01:46.286668  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:01:46.286673  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:46.286680  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:46.286686  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:46.289630  352564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:01:46.289654  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:46.289661  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:46.289667  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:46.289672  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:46 GMT
	I1002 11:01:46.289676  352564 round_trippers.go:580]     Audit-Id: 461b51de-4e7a-4a4b-aefd-5b27531ef92c
	I1002 11:01:46.289681  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:46.289686  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:46.289851  352564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1002 11:01:46.290171  352564 pod_ready.go:92] pod "kube-scheduler-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:01:46.290185  352564 pod_ready.go:81] duration metric: took 399.880754ms waiting for pod "kube-scheduler-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:01:46.290206  352564 pod_ready.go:38] duration metric: took 1.200251937s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:01:46.290226  352564 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:01:46.290276  352564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:01:46.304305  352564 system_svc.go:56] duration metric: took 14.070786ms WaitForService to wait for kubelet.
	I1002 11:01:46.304333  352564 kubeadm.go:581] duration metric: took 11.743752027s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:01:46.304362  352564 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:01:46.487186  352564 request.go:629] Waited for 182.736722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes
	I1002 11:01:46.487259  352564 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes
	I1002 11:01:46.487265  352564 round_trippers.go:469] Request Headers:
	I1002 11:01:46.487273  352564 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:01:46.487286  352564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:01:46.490565  352564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:01:46.490594  352564 round_trippers.go:577] Response Headers:
	I1002 11:01:46.490602  352564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:01:46.490612  352564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:01:46.490620  352564 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:01:46 GMT
	I1002 11:01:46.490629  352564 round_trippers.go:580]     Audit-Id: 88d540dc-630b-43f9-b579-742a8085dbff
	I1002 11:01:46.490637  352564 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:01:46.490657  352564 round_trippers.go:580]     Content-Type: application/json
	I1002 11:01:46.490973  352564 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"385","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 9646 chars]
	I1002 11:01:46.491452  352564 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:01:46.491472  352564 node_conditions.go:123] node cpu capacity is 2
	I1002 11:01:46.491486  352564 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:01:46.491491  352564 node_conditions.go:123] node cpu capacity is 2
	I1002 11:01:46.491497  352564 node_conditions.go:105] duration metric: took 187.129599ms to run NodePressure ...
	I1002 11:01:46.491511  352564 start.go:228] waiting for startup goroutines ...
	I1002 11:01:46.491535  352564 start.go:242] writing updated cluster config ...
	I1002 11:01:46.491840  352564 ssh_runner.go:195] Run: rm -f paused
	I1002 11:01:46.541623  352564 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 11:01:46.544336  352564 out.go:177] * Done! kubectl is now configured to use "multinode-224116" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-02 11:00:07 UTC, ends at Mon 2023-10-02 11:01:54 UTC. --
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.521968405Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696244514521952867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4288ec9a-f27f-4796-8416-e484cfccf254 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.523040819Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=45bdba91-c0b8-4d65-b17d-59e5da894dd5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.523113085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=45bdba91-c0b8-4d65-b17d-59e5da894dd5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.523302574Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fdee759f007aa115011d598c7b33e3da5b87be669e0040644900bc87ab03add,PodSandboxId:e411f57ed3b6ef95020a05a8d09f31f4a0644d0567c75fec27a32b891843adfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1696244510782684009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-h45vs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ed1e56c2-6848-4905-995d-46cecedcabe7,},Annotations:map[string]string{io.kubernetes.container.hash: d782c78,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f86f4dfd5dfa54ec5c1f483a59fb2fdc2b494f3dc28a64ffc189cb7343fe072,PodSandboxId:10ea55596ee7197edf14e577c6e51e8d3d0843a68e7b3b11b481bd914da6f197,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696244459104452640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h6gbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49ee2f4a-1c73-4642-bd3b-678e6cb9ef55,},Annotations:map[string]string{io.kubernetes.container.hash: 873decce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8dbea602827ec736f5ff7c46bed7b8218dd15378d03d8e2346e21e24dba5a09,PodSandboxId:c0f4fe09159ccac3a0a5960f3eeea52b52813f2664c8514ad2e6975fc274b239,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696244458895354863,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: ea5da043-58ea-4918-836d-19655c55b885,},Annotations:map[string]string{io.kubernetes.container.hash: f3c6eb7d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c5aaee19d6b364450393011ecdfc09737a14e2486a67766e4944f5c9b0c8188,PodSandboxId:11cc155f2083295f75fb60f1e1994dd45fb76fb0d87733df70f6caadd9017f82,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1696244456301510637,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f7m28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: dc1438f0-bd67-457d-9e7e-b8998a01b029,},Annotations:map[string]string{io.kubernetes.container.hash: 654e42f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b19893f54486b6484a1f9241a3982a2eae7fb262fb55925e4755e5ed4f6295c,PodSandboxId:9cb7b4dd451155348239eca1b2b365e3f3741b2fb97f6cac820e6f9731e74ac4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696244454114149323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nshcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3def928-5e43-4f7e-8ae2-3c0daaf
d0003,},Annotations:map[string]string{io.kubernetes.container.hash: 44c6b935,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1307a3a9921920520255ae163271d26f1d7c3e1d19b8664705f530daaacf8388,PodSandboxId:71bd8e8ec573e0ae184c8b3be1b517c34dd90e85d931a2fb923694f4c456dce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696244432575037860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd8dc11ca1ef87923294a95bf3b31e7,},Anno
tations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5e5c1ef88d8882fd253ffe07bf432d41efc126835dcb8254723cc53188864c,PodSandboxId:ee2a8d2beacd34e7ea59fab3e8d9584593b44396c6158706d88cac3dae2853f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696244432374051900,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6042e2dc777b7ecb1e5f00a006739c52,},Annotations:map[string]string{io.kubernetes.container.ha
sh: fbfb7459,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413ab1884fa2bacfef9474822763080550ab6858a7c54e110d8fdb0a80cb54ed,PodSandboxId:5455715575c51f4d810e8dd821cea56ee41918c88406438ec16d70d83807bc9a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696244432131810625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11cc08b65180f58db5ea8ca677f3032f,},Annotations:map[string]string{io.kubernetes.container.hash: b08656d0
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ae459a9147bd9fc3d6d296258438e989fffa15ded215e27a512339b6e61fdd,PodSandboxId:e9ccdaa9bf45dc39e23a548058efdcb73d38cb6c8dcbb14953c6b556b43f0574,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696244432089570082,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bee2fdb49df38e62a3033b15b9a59ad,},Annotations:map[string]string{io.kubernetes.
container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=45bdba91-c0b8-4d65-b17d-59e5da894dd5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.567552983Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7eaff82c-3cf6-44d5-9360-c3b9190cd73f name=/runtime.v1.RuntimeService/Version
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.567688866Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7eaff82c-3cf6-44d5-9360-c3b9190cd73f name=/runtime.v1.RuntimeService/Version
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.569015708Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a8a36dcd-1160-4b3d-944a-7c722c6d1954 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.569415895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696244514569400911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a8a36dcd-1160-4b3d-944a-7c722c6d1954 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.570201163Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=383c6ad7-d735-4e93-b383-cd85f4ed7e4f name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.570281537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=383c6ad7-d735-4e93-b383-cd85f4ed7e4f name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.570482842Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fdee759f007aa115011d598c7b33e3da5b87be669e0040644900bc87ab03add,PodSandboxId:e411f57ed3b6ef95020a05a8d09f31f4a0644d0567c75fec27a32b891843adfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1696244510782684009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-h45vs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ed1e56c2-6848-4905-995d-46cecedcabe7,},Annotations:map[string]string{io.kubernetes.container.hash: d782c78,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f86f4dfd5dfa54ec5c1f483a59fb2fdc2b494f3dc28a64ffc189cb7343fe072,PodSandboxId:10ea55596ee7197edf14e577c6e51e8d3d0843a68e7b3b11b481bd914da6f197,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696244459104452640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h6gbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49ee2f4a-1c73-4642-bd3b-678e6cb9ef55,},Annotations:map[string]string{io.kubernetes.container.hash: 873decce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8dbea602827ec736f5ff7c46bed7b8218dd15378d03d8e2346e21e24dba5a09,PodSandboxId:c0f4fe09159ccac3a0a5960f3eeea52b52813f2664c8514ad2e6975fc274b239,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696244458895354863,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: ea5da043-58ea-4918-836d-19655c55b885,},Annotations:map[string]string{io.kubernetes.container.hash: f3c6eb7d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c5aaee19d6b364450393011ecdfc09737a14e2486a67766e4944f5c9b0c8188,PodSandboxId:11cc155f2083295f75fb60f1e1994dd45fb76fb0d87733df70f6caadd9017f82,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1696244456301510637,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f7m28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: dc1438f0-bd67-457d-9e7e-b8998a01b029,},Annotations:map[string]string{io.kubernetes.container.hash: 654e42f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b19893f54486b6484a1f9241a3982a2eae7fb262fb55925e4755e5ed4f6295c,PodSandboxId:9cb7b4dd451155348239eca1b2b365e3f3741b2fb97f6cac820e6f9731e74ac4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696244454114149323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nshcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3def928-5e43-4f7e-8ae2-3c0daaf
d0003,},Annotations:map[string]string{io.kubernetes.container.hash: 44c6b935,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1307a3a9921920520255ae163271d26f1d7c3e1d19b8664705f530daaacf8388,PodSandboxId:71bd8e8ec573e0ae184c8b3be1b517c34dd90e85d931a2fb923694f4c456dce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696244432575037860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd8dc11ca1ef87923294a95bf3b31e7,},Anno
tations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5e5c1ef88d8882fd253ffe07bf432d41efc126835dcb8254723cc53188864c,PodSandboxId:ee2a8d2beacd34e7ea59fab3e8d9584593b44396c6158706d88cac3dae2853f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696244432374051900,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6042e2dc777b7ecb1e5f00a006739c52,},Annotations:map[string]string{io.kubernetes.container.ha
sh: fbfb7459,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413ab1884fa2bacfef9474822763080550ab6858a7c54e110d8fdb0a80cb54ed,PodSandboxId:5455715575c51f4d810e8dd821cea56ee41918c88406438ec16d70d83807bc9a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696244432131810625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11cc08b65180f58db5ea8ca677f3032f,},Annotations:map[string]string{io.kubernetes.container.hash: b08656d0
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ae459a9147bd9fc3d6d296258438e989fffa15ded215e27a512339b6e61fdd,PodSandboxId:e9ccdaa9bf45dc39e23a548058efdcb73d38cb6c8dcbb14953c6b556b43f0574,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696244432089570082,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bee2fdb49df38e62a3033b15b9a59ad,},Annotations:map[string]string{io.kubernetes.
container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=383c6ad7-d735-4e93-b383-cd85f4ed7e4f name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.655754406Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=bbb1b395-9bbe-45ba-88cc-34452a5d24f1 name=/runtime.v1.RuntimeService/Version
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.655936964Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bbb1b395-9bbe-45ba-88cc-34452a5d24f1 name=/runtime.v1.RuntimeService/Version
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.657486608Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f43a30f2-1a2c-4fc2-ad33-8925d4eb4551 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.658139518Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696244514658118543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f43a30f2-1a2c-4fc2-ad33-8925d4eb4551 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.659220804Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=91d73a67-7c13-412e-ab40-016c3105781c name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.659316088Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=91d73a67-7c13-412e-ab40-016c3105781c name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:01:54 multinode-224116 crio[715]: time="2023-10-02 11:01:54.659570878Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0fdee759f007aa115011d598c7b33e3da5b87be669e0040644900bc87ab03add,PodSandboxId:e411f57ed3b6ef95020a05a8d09f31f4a0644d0567c75fec27a32b891843adfe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1696244510782684009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-h45vs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ed1e56c2-6848-4905-995d-46cecedcabe7,},Annotations:map[string]string{io.kubernetes.container.hash: d782c78,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f86f4dfd5dfa54ec5c1f483a59fb2fdc2b494f3dc28a64ffc189cb7343fe072,PodSandboxId:10ea55596ee7197edf14e577c6e51e8d3d0843a68e7b3b11b481bd914da6f197,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696244459104452640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h6gbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49ee2f4a-1c73-4642-bd3b-678e6cb9ef55,},Annotations:map[string]string{io.kubernetes.container.hash: 873decce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8dbea602827ec736f5ff7c46bed7b8218dd15378d03d8e2346e21e24dba5a09,PodSandboxId:c0f4fe09159ccac3a0a5960f3eeea52b52813f2664c8514ad2e6975fc274b239,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696244458895354863,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: ea5da043-58ea-4918-836d-19655c55b885,},Annotations:map[string]string{io.kubernetes.container.hash: f3c6eb7d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c5aaee19d6b364450393011ecdfc09737a14e2486a67766e4944f5c9b0c8188,PodSandboxId:11cc155f2083295f75fb60f1e1994dd45fb76fb0d87733df70f6caadd9017f82,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1696244456301510637,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f7m28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: dc1438f0-bd67-457d-9e7e-b8998a01b029,},Annotations:map[string]string{io.kubernetes.container.hash: 654e42f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b19893f54486b6484a1f9241a3982a2eae7fb262fb55925e4755e5ed4f6295c,PodSandboxId:9cb7b4dd451155348239eca1b2b365e3f3741b2fb97f6cac820e6f9731e74ac4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696244454114149323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nshcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3def928-5e43-4f7e-8ae2-3c0daaf
d0003,},Annotations:map[string]string{io.kubernetes.container.hash: 44c6b935,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1307a3a9921920520255ae163271d26f1d7c3e1d19b8664705f530daaacf8388,PodSandboxId:71bd8e8ec573e0ae184c8b3be1b517c34dd90e85d931a2fb923694f4c456dce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696244432575037860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd8dc11ca1ef87923294a95bf3b31e7,},Anno
tations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b5e5c1ef88d8882fd253ffe07bf432d41efc126835dcb8254723cc53188864c,PodSandboxId:ee2a8d2beacd34e7ea59fab3e8d9584593b44396c6158706d88cac3dae2853f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696244432374051900,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6042e2dc777b7ecb1e5f00a006739c52,},Annotations:map[string]string{io.kubernetes.container.ha
sh: fbfb7459,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413ab1884fa2bacfef9474822763080550ab6858a7c54e110d8fdb0a80cb54ed,PodSandboxId:5455715575c51f4d810e8dd821cea56ee41918c88406438ec16d70d83807bc9a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696244432131810625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11cc08b65180f58db5ea8ca677f3032f,},Annotations:map[string]string{io.kubernetes.container.hash: b08656d0
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ae459a9147bd9fc3d6d296258438e989fffa15ded215e27a512339b6e61fdd,PodSandboxId:e9ccdaa9bf45dc39e23a548058efdcb73d38cb6c8dcbb14953c6b556b43f0574,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696244432089570082,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bee2fdb49df38e62a3033b15b9a59ad,},Annotations:map[string]string{io.kubernetes.
container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=91d73a67-7c13-412e-ab40-016c3105781c name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0fdee759f007a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 seconds ago        Running             busybox                   0                   e411f57ed3b6e       busybox-5bc68d56bd-h45vs
	0f86f4dfd5dfa       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      55 seconds ago       Running             coredns                   0                   10ea55596ee71       coredns-5dd5756b68-h6gbq
	a8dbea602827e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      55 seconds ago       Running             storage-provisioner       0                   c0f4fe09159cc       storage-provisioner
	1c5aaee19d6b3       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      58 seconds ago       Running             kindnet-cni               0                   11cc155f20832       kindnet-f7m28
	9b19893f54486       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                      About a minute ago   Running             kube-proxy                0                   9cb7b4dd45115       kube-proxy-nshcj
	1307a3a992192       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                      About a minute ago   Running             kube-scheduler            0                   71bd8e8ec573e       kube-scheduler-multinode-224116
	0b5e5c1ef88d8       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   ee2a8d2beacd3       etcd-multinode-224116
	413ab1884fa2b       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                      About a minute ago   Running             kube-apiserver            0                   5455715575c51       kube-apiserver-multinode-224116
	46ae459a9147b       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                      About a minute ago   Running             kube-controller-manager   0                   e9ccdaa9bf45d       kube-controller-manager-multinode-224116
	
	* 
	* ==> coredns [0f86f4dfd5dfa54ec5c1f483a59fb2fdc2b494f3dc28a64ffc189cb7343fe072] <==
	* [INFO] 10.244.0.3:41434 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052169s
	[INFO] 10.244.1.2:52351 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138526s
	[INFO] 10.244.1.2:60747 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001718739s
	[INFO] 10.244.1.2:43822 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103982s
	[INFO] 10.244.1.2:42903 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104507s
	[INFO] 10.244.1.2:43951 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001095229s
	[INFO] 10.244.1.2:33575 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075234s
	[INFO] 10.244.1.2:40044 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000300678s
	[INFO] 10.244.1.2:33524 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000488807s
	[INFO] 10.244.0.3:48461 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106161s
	[INFO] 10.244.0.3:44208 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113648s
	[INFO] 10.244.0.3:49345 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075379s
	[INFO] 10.244.0.3:38187 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082944s
	[INFO] 10.244.1.2:45258 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179223s
	[INFO] 10.244.1.2:49245 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172896s
	[INFO] 10.244.1.2:54501 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130428s
	[INFO] 10.244.1.2:48272 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110433s
	[INFO] 10.244.0.3:60495 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000223857s
	[INFO] 10.244.0.3:51526 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000150462s
	[INFO] 10.244.0.3:48408 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000276094s
	[INFO] 10.244.0.3:60662 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101748s
	[INFO] 10.244.1.2:38648 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000333563s
	[INFO] 10.244.1.2:37214 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085688s
	[INFO] 10.244.1.2:50257 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121914s
	[INFO] 10.244.1.2:44467 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099417s
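	The coredns lines above come from the CoreDNS `log` plugin: client address, query id, the quoted question (type, class, name, transport, size, DO bit, UDP buffer size), then rcode, header flags, reply size, and duration. A minimal Go sketch of pulling the useful fields out of one such line — the `QueryLog`/`parseQueryLog` names are ours, not CoreDNS API:

```go
package main

import (
	"fmt"
	"strings"
)

// QueryLog holds a few fields of one CoreDNS "log" plugin line as captured
// in this report. Field names are our own invention.
type QueryLog struct {
	Client   string // source ip:port
	QType    string // e.g. "A", "AAAA", "PTR"
	Name     string // queried name
	Rcode    string // e.g. "NOERROR", "NXDOMAIN"
	Duration string // e.g. "0.000138526s"
}

// parseQueryLog is a best-effort sketch: it splits on the quoted question
// section and on whitespace, and errors out on lines that do not match.
func parseQueryLog(line string) (QueryLog, error) {
	parts := strings.SplitN(line, `"`, 3)
	if len(parts) != 3 {
		return QueryLog{}, fmt.Errorf("no quoted question section: %q", line)
	}
	head := strings.Fields(parts[0])  // [INFO] client - id
	query := strings.Fields(parts[1]) // qtype class name proto size do bufsize
	tail := strings.Fields(parts[2])  // rcode flags rsize duration
	if len(head) < 2 || len(query) < 3 || len(tail) < 4 {
		return QueryLog{}, fmt.Errorf("unexpected field count: %q", line)
	}
	return QueryLog{
		Client:   head[1],
		QType:    query[0],
		Name:     query[2],
		Rcode:    tail[0],
		Duration: tail[len(tail)-1],
	}, nil
}

func main() {
	line := `[INFO] 10.244.1.2:60747 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001718739s`
	q, err := parseQueryLog(line)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s asked %s for %s -> %s in %s\n", q.Client, q.QType, q.Name, q.Rcode, q.Duration)
}
```

	Note the NXDOMAIN lines for `kubernetes.default.` above are expected: the resolver walks the pod's search path, and only the fully qualified `kubernetes.default.svc.cluster.local.` resolves.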
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-224116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-224116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=multinode-224116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T11_00_41_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 11:00:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-224116
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 11:01:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 11:00:58 +0000   Mon, 02 Oct 2023 11:00:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 11:00:58 +0000   Mon, 02 Oct 2023 11:00:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 11:00:58 +0000   Mon, 02 Oct 2023 11:00:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 11:00:58 +0000   Mon, 02 Oct 2023 11:00:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    multinode-224116
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 464a25fde409486a8499c1b4a9875d71
	  System UUID:                464a25fd-e409-486a-8499-c1b4a9875d71
	  Boot ID:                    92db713c-1497-46ca-b8c0-8c8949dc9c2c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-h45vs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-h6gbq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     62s
	  kube-system                 etcd-multinode-224116                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         75s
	  kube-system                 kindnet-f7m28                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      62s
	  kube-system                 kube-apiserver-multinode-224116             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-multinode-224116    200m (10%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-nshcj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-multinode-224116             100m (5%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 60s   kube-proxy       
	  Normal  Starting                 75s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s   kubelet          Node multinode-224116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s   kubelet          Node multinode-224116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s   kubelet          Node multinode-224116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  74s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           63s   node-controller  Node multinode-224116 event: Registered Node multinode-224116 in Controller
	  Normal  NodeReady                56s   kubelet          Node multinode-224116 status is now: NodeReady
	
	
	Name:               multinode-224116-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-224116-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 11:01:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-224116-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 11:01:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 11:01:44 +0000   Mon, 02 Oct 2023 11:01:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 11:01:44 +0000   Mon, 02 Oct 2023 11:01:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 11:01:44 +0000   Mon, 02 Oct 2023 11:01:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 11:01:44 +0000   Mon, 02 Oct 2023 11:01:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.135
	  Hostname:    multinode-224116-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 9faff2bd4d0e490eb7c3fbd314099eb6
	  System UUID:                9faff2bd-4d0e-490e-b7c3-fbd314099eb6
	  Boot ID:                    62386d71-74e9-40ca-ab53-69ca8dd92473
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-jjswt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-crtcw               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21s
	  kube-system                 kube-proxy-rdt77            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeHasSufficientMemory  21s (x5 over 22s)  kubelet          Node multinode-224116-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x5 over 22s)  kubelet          Node multinode-224116-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x5 over 22s)  kubelet          Node multinode-224116-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18s                node-controller  Node multinode-224116-m02 event: Registered Node multinode-224116-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-224116-m02 status is now: NodeReady
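	The request/limit percentages in the node descriptions above are computed against each node's Allocatable block (2000m CPU, 2165900Ki memory). A minimal Go sketch of the arithmetic, assuming integer truncation of requested/allocatable as the displayed values suggest — `percentOf` is our own helper, not a kubectl function:

```go
package main

import "fmt"

// percentOf reproduces the percentages shown by `kubectl describe node`
// above, assuming integer truncation of requested*100/allocatable.
func percentOf(requested, allocatable int64) int64 {
	return requested * 100 / allocatable
}

func main() {
	const allocatableCPU = 2000      // 2 cores, in millicores
	const allocatableMemKi = 2165900 // from the Allocatable block above

	fmt.Println(percentOf(850, allocatableCPU))        // cpu requests 850m -> 42
	fmt.Println(percentOf(220*1024, allocatableMemKi)) // memory requests 220Mi -> 10
	fmt.Println(percentOf(100, allocatableCPU))        // cpu limits 100m -> 5
}
```

	The same math gives the m02 figures: 100m on a 2-core node is 5%, and 50Mi against 2165900Ki is 2%.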
	
	* 
	* ==> dmesg <==
	* [Oct 2 10:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071871] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct 2 11:00] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.382216] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154291] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.043348] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.315151] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.108018] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.147822] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.115865] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.230824] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[  +9.677472] systemd-fstab-generator[923]: Ignoring "noauto" for root device
	[  +9.271103] systemd-fstab-generator[1254]: Ignoring "noauto" for root device
	[ +20.551131] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [0b5e5c1ef88d8882fd253ffe07bf432d41efc126835dcb8254723cc53188864c] <==
	* {"level":"info","ts":"2023-10-02T11:00:34.495803Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ffc3b7517aaad9f6","initial-advertise-peer-urls":["https://192.168.39.165:2380"],"listen-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.165:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-02T11:00:34.496043Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-02T11:00:34.496235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 switched to configuration voters=(18429775660708452854)"}
	{"level":"info","ts":"2023-10-02T11:00:34.496366Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"58f0a6b9f17e1f60","local-member-id":"ffc3b7517aaad9f6","added-peer-id":"ffc3b7517aaad9f6","added-peer-peer-urls":["https://192.168.39.165:2380"]}
	{"level":"info","ts":"2023-10-02T11:00:34.673538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-02T11:00:34.673767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-02T11:00:34.673804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 received MsgPreVoteResp from ffc3b7517aaad9f6 at term 1"}
	{"level":"info","ts":"2023-10-02T11:00:34.673935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became candidate at term 2"}
	{"level":"info","ts":"2023-10-02T11:00:34.673964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 received MsgVoteResp from ffc3b7517aaad9f6 at term 2"}
	{"level":"info","ts":"2023-10-02T11:00:34.674066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became leader at term 2"}
	{"level":"info","ts":"2023-10-02T11:00:34.674095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ffc3b7517aaad9f6 elected leader ffc3b7517aaad9f6 at term 2"}
	{"level":"info","ts":"2023-10-02T11:00:34.676167Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ffc3b7517aaad9f6","local-member-attributes":"{Name:multinode-224116 ClientURLs:[https://192.168.39.165:2379]}","request-path":"/0/members/ffc3b7517aaad9f6/attributes","cluster-id":"58f0a6b9f17e1f60","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T11:00:34.676309Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T11:00:34.678934Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:00:34.679152Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T11:00:34.68107Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T11:00:34.681312Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-02T11:00:34.681195Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T11:00:34.682925Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"58f0a6b9f17e1f60","local-member-id":"ffc3b7517aaad9f6","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:00:34.683037Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:00:34.68306Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:00:34.684072Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.165:2379"}
	{"level":"info","ts":"2023-10-02T11:00:53.674769Z","caller":"traceutil/trace.go:171","msg":"trace[1420637664] transaction","detail":"{read_only:false; response_revision:362; number_of_response:1; }","duration":"105.80269ms","start":"2023-10-02T11:00:53.568954Z","end":"2023-10-02T11:00:53.674757Z","steps":["trace[1420637664] 'process raft request'  (duration: 105.440653ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T11:01:32.612701Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.916025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-02T11:01:32.612965Z","caller":"traceutil/trace.go:171","msg":"trace[1409699193] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:442; }","duration":"117.209912ms","start":"2023-10-02T11:01:32.495736Z","end":"2023-10-02T11:01:32.612946Z","steps":["trace[1409699193] 'range keys from in-memory index tree'  (duration: 116.737873ms)"],"step_count":1}
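	The etcd warning above fires because the apply `took` value exceeded the `expected-duration` threshold of 100ms. A small Go sketch of that comparison, using the duration strings as they appear in the log — `slowApply` is our own illustrative helper, not etcd code:

```go
package main

import (
	"fmt"
	"time"
)

// slowApply reports whether a logged "took" duration exceeds the
// "expected-duration" threshold, mirroring the check behind etcd's
// "apply request took too long" warning above.
func slowApply(took, expected string) (bool, error) {
	t, err := time.ParseDuration(took)
	if err != nil {
		return false, err
	}
	e, err := time.ParseDuration(expected)
	if err != nil {
		return false, err
	}
	return t > e, nil
}

func main() {
	// Values from the warning line above: 116.916025ms vs the 100ms budget.
	slow, err := slowApply("116.916025ms", "100ms")
	if err != nil {
		panic(err)
	}
	fmt.Println(slow) // true, hence the warning
}
```

	On a 2-CPU test VM a ~117ms read is mild overrun; sustained breaches of this threshold usually point to disk or CPU contention on the host.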
	
	* 
	* ==> kernel <==
	*  11:01:55 up 1 min,  0 users,  load average: 0.68, 0.30, 0.11
	Linux multinode-224116 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [1c5aaee19d6b364450393011ecdfc09737a14e2486a67766e4944f5c9b0c8188] <==
	* I1002 11:00:57.059519       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1002 11:00:57.059675       1 main.go:107] hostIP = 192.168.39.165
	podIP = 192.168.39.165
	I1002 11:00:57.059989       1 main.go:116] setting mtu 1500 for CNI 
	I1002 11:00:57.060098       1 main.go:146] kindnetd IP family: "ipv4"
	I1002 11:00:57.060137       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1002 11:00:57.747065       1 main.go:223] Handling node with IPs: map[192.168.39.165:{}]
	I1002 11:00:57.747167       1 main.go:227] handling current node
	I1002 11:01:07.764162       1 main.go:223] Handling node with IPs: map[192.168.39.165:{}]
	I1002 11:01:07.764216       1 main.go:227] handling current node
	I1002 11:01:17.776621       1 main.go:223] Handling node with IPs: map[192.168.39.165:{}]
	I1002 11:01:17.776765       1 main.go:227] handling current node
	I1002 11:01:27.785537       1 main.go:223] Handling node with IPs: map[192.168.39.165:{}]
	I1002 11:01:27.785585       1 main.go:227] handling current node
	I1002 11:01:37.799181       1 main.go:223] Handling node with IPs: map[192.168.39.165:{}]
	I1002 11:01:37.799427       1 main.go:227] handling current node
	I1002 11:01:37.799460       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I1002 11:01:37.799481       1 main.go:250] Node multinode-224116-m02 has CIDR [10.244.1.0/24] 
	I1002 11:01:37.799973       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.135 Flags: [] Table: 0} 
	I1002 11:01:47.805515       1 main.go:223] Handling node with IPs: map[192.168.39.165:{}]
	I1002 11:01:47.805642       1 main.go:227] handling current node
	I1002 11:01:47.805666       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I1002 11:01:47.805683       1 main.go:250] Node multinode-224116-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [413ab1884fa2bacfef9474822763080550ab6858a7c54e110d8fdb0a80cb54ed] <==
	* I1002 11:00:36.773258       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1002 11:00:36.811448       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1002 11:00:36.818538       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 11:00:36.819636       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1002 11:00:36.819678       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1002 11:00:36.819726       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 11:00:36.820399       1 shared_informer.go:318] Caches are synced for configmaps
	I1002 11:00:36.820760       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1002 11:00:36.822534       1 controller.go:624] quota admission added evaluator for: namespaces
	I1002 11:00:37.022109       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 11:00:37.623796       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 11:00:37.631277       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 11:00:37.631415       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 11:00:38.217634       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 11:00:38.259176       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 11:00:38.348391       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 11:00:38.357412       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.165]
	I1002 11:00:38.358326       1 controller.go:624] quota admission added evaluator for: endpoints
	I1002 11:00:38.363148       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 11:00:38.708437       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 11:00:39.869554       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 11:00:39.887589       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 11:00:39.914768       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1002 11:00:51.566148       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1002 11:00:52.214998       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [46ae459a9147bd9fc3d6d296258438e989fffa15ded215e27a512339b6e61fdd] <==
	* I1002 11:00:52.844995       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="190.77µs"
	I1002 11:00:58.083895       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="147.457µs"
	I1002 11:00:58.104433       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.137µs"
	I1002 11:01:00.207229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.294µs"
	I1002 11:01:00.250542       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.902983ms"
	I1002 11:01:00.250742       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.002µs"
	I1002 11:01:01.714226       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1002 11:01:33.817134       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-224116-m02\" does not exist"
	I1002 11:01:33.828104       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-224116-m02" podCIDRs=["10.244.1.0/24"]
	I1002 11:01:33.845809       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-crtcw"
	I1002 11:01:33.857131       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rdt77"
	I1002 11:01:36.720236       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-224116-m02"
	I1002 11:01:36.720400       1 event.go:307] "Event occurred" object="multinode-224116-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-224116-m02 event: Registered Node multinode-224116-m02 in Controller"
	I1002 11:01:44.597589       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-224116-m02"
	I1002 11:01:47.260720       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1002 11:01:47.280023       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-jjswt"
	I1002 11:01:47.295446       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-h45vs"
	I1002 11:01:47.323014       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="61.680908ms"
	I1002 11:01:47.343557       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.523163ms"
	I1002 11:01:47.343870       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="175.026µs"
	I1002 11:01:47.349059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="27.318µs"
	I1002 11:01:51.397620       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="11.613295ms"
	I1002 11:01:51.398797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="77.565µs"
	I1002 11:01:51.424042       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.730774ms"
	I1002 11:01:51.424131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="36.315µs"
	
	* 
	* ==> kube-proxy [9b19893f54486b6484a1f9241a3982a2eae7fb262fb55925e4755e5ed4f6295c] <==
	* I1002 11:00:54.290100       1 server_others.go:69] "Using iptables proxy"
	I1002 11:00:54.300241       1 node.go:141] Successfully retrieved node IP: 192.168.39.165
	I1002 11:00:54.345653       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1002 11:00:54.345700       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 11:00:54.353069       1 server_others.go:152] "Using iptables Proxier"
	I1002 11:00:54.353145       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 11:00:54.353291       1 server.go:846] "Version info" version="v1.28.2"
	I1002 11:00:54.353332       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 11:00:54.357681       1 config.go:188] "Starting service config controller"
	I1002 11:00:54.357737       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 11:00:54.357789       1 config.go:97] "Starting endpoint slice config controller"
	I1002 11:00:54.357891       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 11:00:54.358467       1 config.go:315] "Starting node config controller"
	I1002 11:00:54.358505       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 11:00:54.458417       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 11:00:54.458480       1 shared_informer.go:318] Caches are synced for service config
	I1002 11:00:54.458633       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [1307a3a9921920520255ae163271d26f1d7c3e1d19b8664705f530daaacf8388] <==
	* W1002 11:00:36.762933       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 11:00:36.762973       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1002 11:00:36.763036       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1002 11:00:36.763047       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1002 11:00:37.708069       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 11:00:37.708173       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1002 11:00:37.723991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 11:00:37.724085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1002 11:00:37.732183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1002 11:00:37.732293       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1002 11:00:37.751215       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 11:00:37.751322       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1002 11:00:37.786612       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 11:00:37.786689       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1002 11:00:37.823396       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 11:00:37.823678       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1002 11:00:37.860418       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1002 11:00:37.860508       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1002 11:00:37.982283       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 11:00:37.982369       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1002 11:00:38.012077       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 11:00:38.012170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1002 11:00:38.189051       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 11:00:38.189143       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1002 11:00:40.149196       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 11:00:07 UTC, ends at Mon 2023-10-02 11:01:55 UTC. --
	Oct 02 11:00:52 multinode-224116 kubelet[1261]: I1002 11:00:52.270463    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3def928-5e43-4f7e-8ae2-3c0daafd0003-lib-modules\") pod \"kube-proxy-nshcj\" (UID: \"f3def928-5e43-4f7e-8ae2-3c0daafd0003\") " pod="kube-system/kube-proxy-nshcj"
	Oct 02 11:00:52 multinode-224116 kubelet[1261]: I1002 11:00:52.270483    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f3def928-5e43-4f7e-8ae2-3c0daafd0003-kube-proxy\") pod \"kube-proxy-nshcj\" (UID: \"f3def928-5e43-4f7e-8ae2-3c0daafd0003\") " pod="kube-system/kube-proxy-nshcj"
	Oct 02 11:00:52 multinode-224116 kubelet[1261]: I1002 11:00:52.270505    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dc1438f0-bd67-457d-9e7e-b8998a01b029-cni-cfg\") pod \"kindnet-f7m28\" (UID: \"dc1438f0-bd67-457d-9e7e-b8998a01b029\") " pod="kube-system/kindnet-f7m28"
	Oct 02 11:00:52 multinode-224116 kubelet[1261]: I1002 11:00:52.270526    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3def928-5e43-4f7e-8ae2-3c0daafd0003-xtables-lock\") pod \"kube-proxy-nshcj\" (UID: \"f3def928-5e43-4f7e-8ae2-3c0daafd0003\") " pod="kube-system/kube-proxy-nshcj"
	Oct 02 11:00:52 multinode-224116 kubelet[1261]: I1002 11:00:52.270544    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h9th\" (UniqueName: \"kubernetes.io/projected/f3def928-5e43-4f7e-8ae2-3c0daafd0003-kube-api-access-2h9th\") pod \"kube-proxy-nshcj\" (UID: \"f3def928-5e43-4f7e-8ae2-3c0daafd0003\") " pod="kube-system/kube-proxy-nshcj"
	Oct 02 11:00:52 multinode-224116 kubelet[1261]: W1002 11:00:52.270239    1261 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:multinode-224116" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-224116' and this object
	Oct 02 11:00:52 multinode-224116 kubelet[1261]: E1002 11:00:52.270580    1261 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:multinode-224116" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-224116' and this object
	Oct 02 11:00:52 multinode-224116 kubelet[1261]: E1002 11:00:52.270922    1261 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-224116" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-224116' and this object
	Oct 02 11:00:55 multinode-224116 kubelet[1261]: I1002 11:00:55.174251    1261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nshcj" podStartSLOduration=3.174215949 podCreationTimestamp="2023-10-02 11:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 11:00:55.174165361 +0000 UTC m=+15.335919313" watchObservedRunningTime="2023-10-02 11:00:55.174215949 +0000 UTC m=+15.335969900"
	Oct 02 11:00:58 multinode-224116 kubelet[1261]: I1002 11:00:58.047808    1261 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 02 11:00:58 multinode-224116 kubelet[1261]: I1002 11:00:58.083215    1261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-f7m28" podStartSLOduration=6.083180344 podCreationTimestamp="2023-10-02 11:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 11:00:57.187806901 +0000 UTC m=+17.349560855" watchObservedRunningTime="2023-10-02 11:00:58.083180344 +0000 UTC m=+18.244934295"
	Oct 02 11:00:58 multinode-224116 kubelet[1261]: I1002 11:00:58.083435    1261 topology_manager.go:215] "Topology Admit Handler" podUID="49ee2f4a-1c73-4642-bd3b-678e6cb9ef55" podNamespace="kube-system" podName="coredns-5dd5756b68-h6gbq"
	Oct 02 11:00:58 multinode-224116 kubelet[1261]: I1002 11:00:58.094626    1261 topology_manager.go:215] "Topology Admit Handler" podUID="ea5da043-58ea-4918-836d-19655c55b885" podNamespace="kube-system" podName="storage-provisioner"
	Oct 02 11:00:58 multinode-224116 kubelet[1261]: I1002 11:00:58.111186    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2pqk\" (UniqueName: \"kubernetes.io/projected/49ee2f4a-1c73-4642-bd3b-678e6cb9ef55-kube-api-access-k2pqk\") pod \"coredns-5dd5756b68-h6gbq\" (UID: \"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55\") " pod="kube-system/coredns-5dd5756b68-h6gbq"
	Oct 02 11:00:58 multinode-224116 kubelet[1261]: I1002 11:00:58.111248    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ea5da043-58ea-4918-836d-19655c55b885-tmp\") pod \"storage-provisioner\" (UID: \"ea5da043-58ea-4918-836d-19655c55b885\") " pod="kube-system/storage-provisioner"
	Oct 02 11:00:58 multinode-224116 kubelet[1261]: I1002 11:00:58.111272    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49ee2f4a-1c73-4642-bd3b-678e6cb9ef55-config-volume\") pod \"coredns-5dd5756b68-h6gbq\" (UID: \"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55\") " pod="kube-system/coredns-5dd5756b68-h6gbq"
	Oct 02 11:00:58 multinode-224116 kubelet[1261]: I1002 11:00:58.111294    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj7ml\" (UniqueName: \"kubernetes.io/projected/ea5da043-58ea-4918-836d-19655c55b885-kube-api-access-xj7ml\") pod \"storage-provisioner\" (UID: \"ea5da043-58ea-4918-836d-19655c55b885\") " pod="kube-system/storage-provisioner"
	Oct 02 11:00:59 multinode-224116 kubelet[1261]: I1002 11:00:59.206054    1261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.206009246 podCreationTimestamp="2023-10-02 11:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 11:00:59.205453487 +0000 UTC m=+19.367207438" watchObservedRunningTime="2023-10-02 11:00:59.206009246 +0000 UTC m=+19.367763180"
	Oct 02 11:01:00 multinode-224116 kubelet[1261]: I1002 11:01:00.238430    1261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-h6gbq" podStartSLOduration=8.23839519 podCreationTimestamp="2023-10-02 11:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 11:01:00.208744317 +0000 UTC m=+20.370498268" watchObservedRunningTime="2023-10-02 11:01:00.23839519 +0000 UTC m=+20.400149141"
	Oct 02 11:01:40 multinode-224116 kubelet[1261]: E1002 11:01:40.010924    1261 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 11:01:40 multinode-224116 kubelet[1261]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 11:01:40 multinode-224116 kubelet[1261]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 11:01:40 multinode-224116 kubelet[1261]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 11:01:47 multinode-224116 kubelet[1261]: I1002 11:01:47.304584    1261 topology_manager.go:215] "Topology Admit Handler" podUID="ed1e56c2-6848-4905-995d-46cecedcabe7" podNamespace="default" podName="busybox-5bc68d56bd-h45vs"
	Oct 02 11:01:47 multinode-224116 kubelet[1261]: I1002 11:01:47.393058    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc5nh\" (UniqueName: \"kubernetes.io/projected/ed1e56c2-6848-4905-995d-46cecedcabe7-kube-api-access-gc5nh\") pod \"busybox-5bc68d56bd-h45vs\" (UID: \"ed1e56c2-6848-4905-995d-46cecedcabe7\") " pod="default/busybox-5bc68d56bd-h45vs"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-224116 -n multinode-224116
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-224116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.27s)

                                                
                                    
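The failed check above can be probed by hand. This is a hedged sketch of what PingHostFrom2Pods exercises, not the test's actual code: the context name and the `busybox` deployment come from this report, while the `app=busybox` label selector and the `host.minikube.internal` hostname are assumptions about minikube's conventions.

```shell
# Hedged sketch: from each busybox pod, ping the host the way the test does.
# CTX comes from this report; the label selector and host.minikube.internal
# are assumptions, not taken from the test source.
CTX=multinode-224116
if command -v kubectl >/dev/null 2>&1; then
  for pod in $(kubectl --context "$CTX" get pods -l app=busybox \
      -o jsonpath='{.items[*].metadata.name}'); do
    kubectl --context "$CTX" exec "$pod" -- ping -c 1 host.minikube.internal
  done
else
  echo "kubectl not installed; skipping"
fi
```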
TestMultiNode/serial/RestartKeepsNodes (686.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-224116
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-224116
E1002 11:04:04.538696  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 11:04:14.660496  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-224116: exit status 82 (2m0.882476398s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-224116"  ...
	* Stopping node "multinode-224116"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
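For GUEST_STOP_TIMEOUT failures like the one above, the error box itself names the artifacts to collect before filing an issue. A minimal sketch, assuming the profile name from this run and using a glob for the hash embedded in the stop-log filename:

```shell
# Hedged sketch: collect the diagnostics the minikube error box asks for.
# PROFILE is taken from this report; the /tmp glob stands in for the hashed
# stop-log filename quoted in the failure message.
PROFILE=multinode-224116
if command -v minikube >/dev/null 2>&1; then
  minikube logs --file=logs.txt -p "$PROFILE"
  ls /tmp/minikube_stop_*_0.log 2>/dev/null || true
else
  echo "minikube not installed; skipping"
fi
```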
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-224116" : exit status 82
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-224116 --wait=true -v=8 --alsologtostderr
E1002 11:05:27.583446  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 11:06:55.306243  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 11:09:04.535707  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 11:09:14.659723  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 11:10:37.707143  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 11:11:55.305883  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 11:13:18.355563  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 11:14:04.538532  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 11:14:14.660274  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-224116 --wait=true -v=8 --alsologtostderr: (9m22.262246207s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-224116
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-224116 -n multinode-224116
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-224116 logs -n 25: (1.632278148s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-224116 ssh -n                                                                 | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | multinode-224116-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-224116 cp multinode-224116-m02:/home/docker/cp-test.txt                       | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile638977480/001/cp-test_multinode-224116-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-224116 ssh -n                                                                 | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | multinode-224116-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-224116 cp multinode-224116-m02:/home/docker/cp-test.txt                       | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | multinode-224116:/home/docker/cp-test_multinode-224116-m02_multinode-224116.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-224116 ssh -n                                                                 | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | multinode-224116-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-224116 ssh -n multinode-224116 sudo cat                                       | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | /home/docker/cp-test_multinode-224116-m02_multinode-224116.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-224116 cp multinode-224116-m02:/home/docker/cp-test.txt                       | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | multinode-224116-m03:/home/docker/cp-test_multinode-224116-m02_multinode-224116-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-224116 ssh -n                                                                 | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | multinode-224116-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-224116 ssh -n multinode-224116-m03 sudo cat                                   | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | /home/docker/cp-test_multinode-224116-m02_multinode-224116-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-224116 cp testdata/cp-test.txt                                                | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | multinode-224116-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-224116 ssh -n                                                                 | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | multinode-224116-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-224116 cp multinode-224116-m03:/home/docker/cp-test.txt                       | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile638977480/001/cp-test_multinode-224116-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-224116 ssh -n                                                                 | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | multinode-224116-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-224116 cp multinode-224116-m03:/home/docker/cp-test.txt                       | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | multinode-224116:/home/docker/cp-test_multinode-224116-m03_multinode-224116.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-224116 ssh -n                                                                 | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | multinode-224116-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-224116 ssh -n multinode-224116 sudo cat                                       | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | /home/docker/cp-test_multinode-224116-m03_multinode-224116.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-224116 cp multinode-224116-m03:/home/docker/cp-test.txt                       | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | multinode-224116-m02:/home/docker/cp-test_multinode-224116-m03_multinode-224116-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-224116 ssh -n                                                                 | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | multinode-224116-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-224116 ssh -n multinode-224116-m02 sudo cat                                   | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	|         | /home/docker/cp-test_multinode-224116-m03_multinode-224116-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-224116 node stop m03                                                          | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:02 UTC |
	| node    | multinode-224116 node start                                                             | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:02 UTC | 02 Oct 23 11:03 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-224116                                                                | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:03 UTC |                     |
	| stop    | -p multinode-224116                                                                     | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:03 UTC |                     |
	| start   | -p multinode-224116                                                                     | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:05 UTC | 02 Oct 23 11:14 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-224116                                                                | multinode-224116 | jenkins | v1.31.2 | 02 Oct 23 11:14 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 11:05:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 11:05:24.037513  355913 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:05:24.037660  355913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:05:24.037671  355913 out.go:309] Setting ErrFile to fd 2...
	I1002 11:05:24.037679  355913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:05:24.037861  355913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 11:05:24.038463  355913 out.go:303] Setting JSON to false
	I1002 11:05:24.039494  355913 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6470,"bootTime":1696238254,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 11:05:24.039572  355913 start.go:138] virtualization: kvm guest
	I1002 11:05:24.041978  355913 out.go:177] * [multinode-224116] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 11:05:24.043807  355913 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 11:05:24.045217  355913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:05:24.043835  355913 notify.go:220] Checking for updates...
	I1002 11:05:24.046615  355913 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:05:24.048028  355913 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:05:24.049419  355913 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 11:05:24.050835  355913 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 11:05:24.052623  355913 config.go:182] Loaded profile config "multinode-224116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:05:24.052734  355913 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 11:05:24.053222  355913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:05:24.053267  355913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:05:24.068795  355913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44109
	I1002 11:05:24.069235  355913 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:05:24.069855  355913 main.go:141] libmachine: Using API Version  1
	I1002 11:05:24.069877  355913 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:05:24.070276  355913 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:05:24.070500  355913 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:05:24.105867  355913 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 11:05:24.107309  355913 start.go:298] selected driver: kvm2
	I1002 11:05:24.107328  355913 start.go:902] validating driver "kvm2" against &{Name:multinode-224116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.2 ClusterName:multinode-224116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.135 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.195 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:fal
se ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetric
s:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:05:24.107465  355913 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 11:05:24.107772  355913 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:05:24.107846  355913 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 11:05:24.122642  355913 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 11:05:24.123251  355913 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 11:05:24.123309  355913 cni.go:84] Creating CNI manager for ""
	I1002 11:05:24.123317  355913 cni.go:136] 3 nodes found, recommending kindnet
	I1002 11:05:24.123323  355913 start_flags.go:321] config:
	{Name:multinode-224116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-224116 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.135 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.195 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-pr
ovisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Static
IP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:05:24.123536  355913 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:05:24.126398  355913 out.go:177] * Starting control plane node multinode-224116 in cluster multinode-224116
	I1002 11:05:24.127930  355913 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:05:24.127968  355913 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1002 11:05:24.127984  355913 cache.go:57] Caching tarball of preloaded images
	I1002 11:05:24.128076  355913 preload.go:174] Found /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 11:05:24.128090  355913 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 11:05:24.128201  355913 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/config.json ...
	I1002 11:05:24.128386  355913 start.go:365] acquiring machines lock for multinode-224116: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:05:24.128429  355913 start.go:369] acquired machines lock for "multinode-224116" in 24.351µs
	I1002 11:05:24.128443  355913 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:05:24.128450  355913 fix.go:54] fixHost starting: 
	I1002 11:05:24.128717  355913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:05:24.128739  355913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:05:24.142917  355913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40179
	I1002 11:05:24.143322  355913 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:05:24.143770  355913 main.go:141] libmachine: Using API Version  1
	I1002 11:05:24.143790  355913 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:05:24.144101  355913 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:05:24.144303  355913 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:05:24.144470  355913 main.go:141] libmachine: (multinode-224116) Calling .GetState
	I1002 11:05:24.146081  355913 fix.go:102] recreateIfNeeded on multinode-224116: state=Running err=<nil>
	W1002 11:05:24.146100  355913 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:05:24.148119  355913 out.go:177] * Updating the running kvm2 "multinode-224116" VM ...
	I1002 11:05:24.149354  355913 machine.go:88] provisioning docker machine ...
	I1002 11:05:24.149372  355913 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:05:24.149571  355913 main.go:141] libmachine: (multinode-224116) Calling .GetMachineName
	I1002 11:05:24.149755  355913 buildroot.go:166] provisioning hostname "multinode-224116"
	I1002 11:05:24.149777  355913 main.go:141] libmachine: (multinode-224116) Calling .GetMachineName
	I1002 11:05:24.149906  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:05:24.152256  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:05:24.152747  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:05:24.152787  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:05:24.152892  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:05:24.153041  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:05:24.153196  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:05:24.153325  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:05:24.153471  355913 main.go:141] libmachine: Using SSH client type: native
	I1002 11:05:24.153857  355913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1002 11:05:24.153873  355913 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-224116 && echo "multinode-224116" | sudo tee /etc/hostname
	I1002 11:05:42.550701  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:05:48.630726  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:05:51.702637  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:05:57.782781  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:06:00.854658  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:06:06.934670  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:06:10.006624  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:06:16.086762  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:06:19.162546  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:06:25.238705  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:06:28.310672  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:06:34.390658  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:06:37.462679  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:06:43.542696  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:06:46.614709  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:06:52.694696  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:06:55.766658  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:07:01.846673  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:07:04.918642  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:07:10.998675  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:07:14.070613  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:07:20.150691  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:07:23.222683  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:07:29.302702  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:07:32.374621  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:07:38.454656  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:07:41.526652  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:07:47.606688  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:07:50.678635  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:07:56.758678  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:07:59.830633  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:08:05.910678  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:08:08.982610  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:08:15.062639  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:08:18.134664  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:08:24.214639  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:08:27.286592  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:08:33.366651  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:08:36.438618  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:08:42.518670  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:08:45.590592  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:08:51.670701  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:08:54.742634  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:09:00.822645  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:09:03.894692  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:09:09.974627  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:09:13.046614  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:09:19.126596  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:09:22.198683  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:09:28.278646  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:09:31.350633  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:09:37.430662  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:09:40.502636  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:09:46.582694  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:09:49.654678  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:09:55.734636  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:09:58.806653  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:10:04.886620  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:10:07.958587  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:10:14.038576  355913 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.165:22: connect: no route to host
	I1002 11:10:17.040694  355913 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:10:17.040745  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:10:17.042792  355913 machine.go:91] provisioned docker machine in 4m52.893418292s
	I1002 11:10:17.042838  355913 fix.go:56] fixHost completed within 4m52.914388197s
	I1002 11:10:17.042847  355913 start.go:83] releasing machines lock for "multinode-224116", held for 4m52.914409827s
	W1002 11:10:17.042863  355913 start.go:688] error starting host: provision: host is not running
	W1002 11:10:17.042966  355913 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1002 11:10:17.042975  355913 start.go:703] Will try again in 5 seconds ...
	I1002 11:10:22.044962  355913 start.go:365] acquiring machines lock for multinode-224116: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:10:22.045105  355913 start.go:369] acquired machines lock for "multinode-224116" in 67.286µs
	I1002 11:10:22.045133  355913 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:10:22.045138  355913 fix.go:54] fixHost starting: 
	I1002 11:10:22.045468  355913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:10:22.045490  355913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:10:22.060956  355913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38945
	I1002 11:10:22.061468  355913 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:10:22.061942  355913 main.go:141] libmachine: Using API Version  1
	I1002 11:10:22.061968  355913 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:10:22.062306  355913 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:10:22.062531  355913 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:10:22.062708  355913 main.go:141] libmachine: (multinode-224116) Calling .GetState
	I1002 11:10:22.064362  355913 fix.go:102] recreateIfNeeded on multinode-224116: state=Stopped err=<nil>
	I1002 11:10:22.064385  355913 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	W1002 11:10:22.064548  355913 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:10:22.067251  355913 out.go:177] * Restarting existing kvm2 VM for "multinode-224116" ...
	I1002 11:10:22.068720  355913 main.go:141] libmachine: (multinode-224116) Calling .Start
	I1002 11:10:22.068890  355913 main.go:141] libmachine: (multinode-224116) Ensuring networks are active...
	I1002 11:10:22.069708  355913 main.go:141] libmachine: (multinode-224116) Ensuring network default is active
	I1002 11:10:22.070028  355913 main.go:141] libmachine: (multinode-224116) Ensuring network mk-multinode-224116 is active
	I1002 11:10:22.070335  355913 main.go:141] libmachine: (multinode-224116) Getting domain xml...
	I1002 11:10:22.071046  355913 main.go:141] libmachine: (multinode-224116) Creating domain...
	I1002 11:10:23.339689  355913 main.go:141] libmachine: (multinode-224116) Waiting to get IP...
	I1002 11:10:23.340754  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:23.341289  355913 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:10:23.341421  355913 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:10:23.341258  356729 retry.go:31] will retry after 225.185628ms: waiting for machine to come up
	I1002 11:10:23.567672  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:23.568113  355913 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:10:23.568152  355913 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:10:23.568046  356729 retry.go:31] will retry after 278.413321ms: waiting for machine to come up
	I1002 11:10:23.848556  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:23.848997  355913 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:10:23.849032  355913 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:10:23.848949  356729 retry.go:31] will retry after 306.190102ms: waiting for machine to come up
	I1002 11:10:24.156454  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:24.156952  355913 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:10:24.156978  355913 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:10:24.156895  356729 retry.go:31] will retry after 553.072332ms: waiting for machine to come up
	I1002 11:10:24.711375  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:24.711788  355913 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:10:24.711812  355913 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:10:24.711767  356729 retry.go:31] will retry after 637.933823ms: waiting for machine to come up
	I1002 11:10:25.351704  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:25.352175  355913 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:10:25.352203  355913 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:10:25.352102  356729 retry.go:31] will retry after 816.364728ms: waiting for machine to come up
	I1002 11:10:26.170162  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:26.170650  355913 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:10:26.170670  355913 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:10:26.170620  356729 retry.go:31] will retry after 999.10129ms: waiting for machine to come up
	I1002 11:10:27.170932  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:27.171367  355913 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:10:27.171402  355913 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:10:27.171315  356729 retry.go:31] will retry after 979.883987ms: waiting for machine to come up
	I1002 11:10:28.152651  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:28.153133  355913 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:10:28.153183  355913 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:10:28.153107  356729 retry.go:31] will retry after 1.84144555s: waiting for machine to come up
	I1002 11:10:29.997252  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:29.997765  355913 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:10:29.997804  355913 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:10:29.997719  356729 retry.go:31] will retry after 1.894719814s: waiting for machine to come up
	I1002 11:10:31.893981  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:31.894462  355913 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:10:31.894500  355913 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:10:31.894399  356729 retry.go:31] will retry after 1.915741872s: waiting for machine to come up
	I1002 11:10:33.812987  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:33.813472  355913 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:10:33.813503  355913 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:10:33.813416  356729 retry.go:31] will retry after 2.553553593s: waiting for machine to come up
	I1002 11:10:36.368152  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:36.368556  355913 main.go:141] libmachine: (multinode-224116) DBG | unable to find current IP address of domain multinode-224116 in network mk-multinode-224116
	I1002 11:10:36.368596  355913 main.go:141] libmachine: (multinode-224116) DBG | I1002 11:10:36.368497  356729 retry.go:31] will retry after 3.564011945s: waiting for machine to come up
	I1002 11:10:39.935467  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:39.935835  355913 main.go:141] libmachine: (multinode-224116) Found IP for machine: 192.168.39.165
	I1002 11:10:39.935861  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has current primary IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:39.935868  355913 main.go:141] libmachine: (multinode-224116) Reserving static IP address...
	I1002 11:10:39.936296  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "multinode-224116", mac: "52:54:00:85:8e:87", ip: "192.168.39.165"} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:10:39.936337  355913 main.go:141] libmachine: (multinode-224116) DBG | skip adding static IP to network mk-multinode-224116 - found existing host DHCP lease matching {name: "multinode-224116", mac: "52:54:00:85:8e:87", ip: "192.168.39.165"}
	I1002 11:10:39.936353  355913 main.go:141] libmachine: (multinode-224116) Reserved static IP address: 192.168.39.165
	I1002 11:10:39.936371  355913 main.go:141] libmachine: (multinode-224116) Waiting for SSH to be available...
	I1002 11:10:39.936393  355913 main.go:141] libmachine: (multinode-224116) DBG | Getting to WaitForSSH function...
	I1002 11:10:39.938278  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:39.938598  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:10:39.938628  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:39.938765  355913 main.go:141] libmachine: (multinode-224116) DBG | Using SSH client type: external
	I1002 11:10:39.938795  355913 main.go:141] libmachine: (multinode-224116) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa (-rw-------)
	I1002 11:10:39.938834  355913 main.go:141] libmachine: (multinode-224116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.165 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:10:39.938854  355913 main.go:141] libmachine: (multinode-224116) DBG | About to run SSH command:
	I1002 11:10:39.938887  355913 main.go:141] libmachine: (multinode-224116) DBG | exit 0
	I1002 11:10:40.030014  355913 main.go:141] libmachine: (multinode-224116) DBG | SSH cmd err, output: <nil>: 
	I1002 11:10:40.030496  355913 main.go:141] libmachine: (multinode-224116) Calling .GetConfigRaw
	I1002 11:10:40.031133  355913 main.go:141] libmachine: (multinode-224116) Calling .GetIP
	I1002 11:10:40.033352  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.033705  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:10:40.033741  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.034008  355913 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/config.json ...
	I1002 11:10:40.034211  355913 machine.go:88] provisioning docker machine ...
	I1002 11:10:40.034230  355913 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:10:40.034483  355913 main.go:141] libmachine: (multinode-224116) Calling .GetMachineName
	I1002 11:10:40.034673  355913 buildroot.go:166] provisioning hostname "multinode-224116"
	I1002 11:10:40.034692  355913 main.go:141] libmachine: (multinode-224116) Calling .GetMachineName
	I1002 11:10:40.034847  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:10:40.036971  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.037265  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:10:40.037301  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.037410  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:10:40.037603  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:10:40.037766  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:10:40.037900  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:10:40.038072  355913 main.go:141] libmachine: Using SSH client type: native
	I1002 11:10:40.038522  355913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1002 11:10:40.038537  355913 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-224116 && echo "multinode-224116" | sudo tee /etc/hostname
	I1002 11:10:40.167140  355913 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-224116
	
	I1002 11:10:40.167177  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:10:40.169896  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.170291  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:10:40.170327  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.170523  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:10:40.170762  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:10:40.170925  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:10:40.171051  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:10:40.171208  355913 main.go:141] libmachine: Using SSH client type: native
	I1002 11:10:40.171567  355913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1002 11:10:40.171592  355913 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-224116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-224116/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-224116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:10:40.290910  355913 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:10:40.290952  355913 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:10:40.291008  355913 buildroot.go:174] setting up certificates
	I1002 11:10:40.291023  355913 provision.go:83] configureAuth start
	I1002 11:10:40.291043  355913 main.go:141] libmachine: (multinode-224116) Calling .GetMachineName
	I1002 11:10:40.291349  355913 main.go:141] libmachine: (multinode-224116) Calling .GetIP
	I1002 11:10:40.293961  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.294402  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:10:40.294436  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.294633  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:10:40.296540  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.296904  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:10:40.296934  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.297065  355913 provision.go:138] copyHostCerts
	I1002 11:10:40.297095  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:10:40.297141  355913 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:10:40.297153  355913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:10:40.297226  355913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:10:40.297326  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:10:40.297355  355913 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:10:40.297366  355913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:10:40.297406  355913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:10:40.297473  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:10:40.297499  355913 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:10:40.297508  355913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:10:40.297545  355913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:10:40.297662  355913 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.multinode-224116 san=[192.168.39.165 192.168.39.165 localhost 127.0.0.1 minikube multinode-224116]
	I1002 11:10:40.367514  355913 provision.go:172] copyRemoteCerts
	I1002 11:10:40.367573  355913 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:10:40.367597  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:10:40.370013  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.370312  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:10:40.370381  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.370522  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:10:40.370715  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:10:40.370870  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:10:40.370990  355913 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa Username:docker}
	I1002 11:10:40.455346  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 11:10:40.455430  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:10:40.478501  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 11:10:40.478582  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1002 11:10:40.500468  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 11:10:40.500562  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 11:10:40.522964  355913 provision.go:86] duration metric: configureAuth took 231.919162ms
	I1002 11:10:40.523001  355913 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:10:40.523250  355913 config.go:182] Loaded profile config "multinode-224116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:10:40.523344  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:10:40.525770  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.526160  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:10:40.526189  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.526429  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:10:40.526640  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:10:40.526788  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:10:40.526955  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:10:40.527111  355913 main.go:141] libmachine: Using SSH client type: native
	I1002 11:10:40.527429  355913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1002 11:10:40.527444  355913 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:10:40.839123  355913 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:10:40.839150  355913 machine.go:91] provisioned docker machine in 804.926179ms
	I1002 11:10:40.839167  355913 start.go:300] post-start starting for "multinode-224116" (driver="kvm2")
	I1002 11:10:40.839177  355913 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:10:40.839198  355913 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:10:40.839504  355913 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:10:40.839535  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:10:40.842256  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.842603  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:10:40.842643  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.842798  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:10:40.842978  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:10:40.843177  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:10:40.843328  355913 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa Username:docker}
	I1002 11:10:40.928597  355913 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:10:40.932926  355913 command_runner.go:130] > NAME=Buildroot
	I1002 11:10:40.932954  355913 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I1002 11:10:40.932962  355913 command_runner.go:130] > ID=buildroot
	I1002 11:10:40.932970  355913 command_runner.go:130] > VERSION_ID=2021.02.12
	I1002 11:10:40.932978  355913 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1002 11:10:40.933022  355913 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:10:40.933039  355913 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:10:40.933128  355913 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:10:40.933226  355913 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:10:40.933240  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> /etc/ssl/certs/3398652.pem
	I1002 11:10:40.933330  355913 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:10:40.942318  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:10:40.964530  355913 start.go:303] post-start completed in 125.343624ms
	I1002 11:10:40.964560  355913 fix.go:56] fixHost completed within 18.919420703s
	I1002 11:10:40.964598  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:10:40.967360  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.967685  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:10:40.967721  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:40.967874  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:10:40.968077  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:10:40.968246  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:10:40.968387  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:10:40.968551  355913 main.go:141] libmachine: Using SSH client type: native
	I1002 11:10:40.969029  355913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I1002 11:10:40.969047  355913 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:10:41.083217  355913 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696245041.031324637
	
	I1002 11:10:41.083256  355913 fix.go:206] guest clock: 1696245041.031324637
	I1002 11:10:41.083267  355913 fix.go:219] Guest: 2023-10-02 11:10:41.031324637 +0000 UTC Remote: 2023-10-02 11:10:40.964565446 +0000 UTC m=+316.959247042 (delta=66.759191ms)
	I1002 11:10:41.083294  355913 fix.go:190] guest clock delta is within tolerance: 66.759191ms
	I1002 11:10:41.083300  355913 start.go:83] releasing machines lock for "multinode-224116", held for 19.038179799s
	I1002 11:10:41.083325  355913 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:10:41.083615  355913 main.go:141] libmachine: (multinode-224116) Calling .GetIP
	I1002 11:10:41.086255  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:41.086744  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:10:41.086778  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:41.086895  355913 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:10:41.087417  355913 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:10:41.087611  355913 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:10:41.087689  355913 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:10:41.087745  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:10:41.087853  355913 ssh_runner.go:195] Run: cat /version.json
	I1002 11:10:41.087885  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:10:41.090273  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:41.090512  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:41.090695  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:10:41.090722  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:41.090881  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:10:41.090956  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:10:41.090988  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:41.091119  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:10:41.091173  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:10:41.091300  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:10:41.091432  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:10:41.091426  355913 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa Username:docker}
	I1002 11:10:41.091630  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:10:41.091769  355913 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa Username:docker}
	I1002 11:10:41.195895  355913 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 11:10:41.195960  355913 command_runner.go:130] > {"iso_version": "v1.31.0-1695060926-17240", "kicbase_version": "v0.0.40-1694798187-17250", "minikube_version": "v1.31.2", "commit": "0402681e4770013826956f326b174c70611f3073"}
	I1002 11:10:41.196106  355913 ssh_runner.go:195] Run: systemctl --version
	I1002 11:10:41.201424  355913 command_runner.go:130] > systemd 247 (247)
	I1002 11:10:41.201467  355913 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1002 11:10:41.201694  355913 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:10:41.341360  355913 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 11:10:41.347236  355913 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 11:10:41.347433  355913 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:10:41.347515  355913 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:10:41.363061  355913 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1002 11:10:41.363093  355913 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
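	(The CNI-disable step above — renaming bridge/podman configs to `*.mk_disabled` — can be reproduced in isolation. This is a sketch against a scratch directory, not a live node; the `%!p(MISSING)` in the log is a Go-fmt logging artifact for what is really find's `%p` format.)

	```shell
	# Sketch: rename bridge/podman CNI configs so cri-o ignores them,
	# mirroring the find/-exec mv command from the log, on a temp dir.
	netd=$(mktemp -d)
	touch "$netd/87-podman-bridge.conflist" "$netd/10-other.conf"
	find "$netd" -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	echo
	ls "$netd"   # 10-other.conf is untouched; the podman bridge config gains .mk_disabled
	```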
	I1002 11:10:41.363126  355913 start.go:469] detecting cgroup driver to use...
	I1002 11:10:41.363176  355913 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:10:41.377382  355913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:10:41.390529  355913 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:10:41.390583  355913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:10:41.405490  355913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:10:41.419488  355913 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:10:41.529159  355913 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1002 11:10:41.529251  355913 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:10:41.543311  355913 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1002 11:10:41.641337  355913 docker.go:213] disabling docker service ...
	I1002 11:10:41.641421  355913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:10:41.655701  355913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:10:41.667365  355913 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1002 11:10:41.667476  355913 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:10:41.771220  355913 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1002 11:10:41.771312  355913 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:10:41.876884  355913 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1002 11:10:41.876921  355913 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1002 11:10:41.876993  355913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:10:41.889620  355913 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:10:41.906658  355913 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
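	(The crictl.yaml write above is a one-liner; here it is sketched against a temp dir instead of `/etc` — the `%!s(MISSING)` in the logged command is a Go-fmt logging artifact for a plain `%s`.)

	```shell
	# Sketch: write the crictl runtime-endpoint config, as in the log,
	# but into a scratch directory rather than /etc.
	etc=$(mktemp -d)
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | tee "$etc/crictl.yaml"
	```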
	I1002 11:10:41.906702  355913 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:10:41.906759  355913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:10:41.915671  355913 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:10:41.915736  355913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:10:41.924806  355913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:10:41.933670  355913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:10:41.942642  355913 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
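	(The four sed edits above rewrite `/etc/crio/crio.conf.d/02-crio.conf`. This sketch replays them, in the same order as the log, on a scratch copy with hypothetical starting values — the real file's contents are not shown in the log.)

	```shell
	# Sketch: replay minikube's cri-o config edits on a temp file.
	# Starting values below are assumptions for illustration only.
	conf=$(mktemp)
	printf '%s\n' \
	  'pause_image = "k8s.gcr.io/pause:3.6"' \
	  'cgroup_manager = "systemd"' \
	  'conmon_cgroup = "system.slice"' > "$conf"
	sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
	sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
	sed -i '/conmon_cgroup = .*/d' "$conf"
	sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
	cat "$conf"
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	```

	Note the order matters: the old `conmon_cgroup` line is deleted before the new one is appended after `cgroup_manager`, so the append's own output is not swept up by the delete.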
	I1002 11:10:41.951862  355913 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:10:41.959637  355913 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:10:41.959667  355913 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:10:41.959713  355913 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:10:41.972311  355913 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:10:41.980341  355913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:10:42.078033  355913 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:10:42.231549  355913 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:10:42.231630  355913 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:10:42.238389  355913 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 11:10:42.238417  355913 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 11:10:42.238431  355913 command_runner.go:130] > Device: 16h/22d	Inode: 737         Links: 1
	I1002 11:10:42.238443  355913 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 11:10:42.238448  355913 command_runner.go:130] > Access: 2023-10-02 11:10:42.165172782 +0000
	I1002 11:10:42.238454  355913 command_runner.go:130] > Modify: 2023-10-02 11:10:42.165172782 +0000
	I1002 11:10:42.238462  355913 command_runner.go:130] > Change: 2023-10-02 11:10:42.165172782 +0000
	I1002 11:10:42.238466  355913 command_runner.go:130] >  Birth: -
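	(The "Will wait 60s for socket path" step above polls with `stat` until the socket appears. A minimal sketch of that pattern, with a temp file stood up in the background so the loop terminates quickly; timings here are illustrative, not minikube's.)

	```shell
	# Sketch: poll for a path with stat, as minikube does for crio.sock.
	sock="$(mktemp -d)/crio.sock"
	( sleep 0.3; touch "$sock" ) &   # stand-in for crio creating its socket
	ready=no
	for i in $(seq 1 50); do
	  if stat "$sock" >/dev/null 2>&1; then ready=yes; break; fi
	  sleep 0.1
	done
	wait
	echo "socket ready: $ready"
	```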
	I1002 11:10:42.238917  355913 start.go:537] Will wait 60s for crictl version
	I1002 11:10:42.238980  355913 ssh_runner.go:195] Run: which crictl
	I1002 11:10:42.243271  355913 command_runner.go:130] > /usr/bin/crictl
	I1002 11:10:42.243587  355913 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:10:42.280306  355913 command_runner.go:130] > Version:  0.1.0
	I1002 11:10:42.280327  355913 command_runner.go:130] > RuntimeName:  cri-o
	I1002 11:10:42.280471  355913 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1002 11:10:42.280617  355913 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 11:10:42.282558  355913 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:10:42.282636  355913 ssh_runner.go:195] Run: crio --version
	I1002 11:10:42.329295  355913 command_runner.go:130] > crio version 1.24.1
	I1002 11:10:42.329321  355913 command_runner.go:130] > Version:          1.24.1
	I1002 11:10:42.329331  355913 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1002 11:10:42.329338  355913 command_runner.go:130] > GitTreeState:     dirty
	I1002 11:10:42.329347  355913 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1002 11:10:42.329362  355913 command_runner.go:130] > GoVersion:        go1.19.9
	I1002 11:10:42.329368  355913 command_runner.go:130] > Compiler:         gc
	I1002 11:10:42.329375  355913 command_runner.go:130] > Platform:         linux/amd64
	I1002 11:10:42.329393  355913 command_runner.go:130] > Linkmode:         dynamic
	I1002 11:10:42.329410  355913 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 11:10:42.329421  355913 command_runner.go:130] > SeccompEnabled:   true
	I1002 11:10:42.329440  355913 command_runner.go:130] > AppArmorEnabled:  false
	I1002 11:10:42.330795  355913 ssh_runner.go:195] Run: crio --version
	I1002 11:10:42.373334  355913 command_runner.go:130] > crio version 1.24.1
	I1002 11:10:42.373357  355913 command_runner.go:130] > Version:          1.24.1
	I1002 11:10:42.373363  355913 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1002 11:10:42.373368  355913 command_runner.go:130] > GitTreeState:     dirty
	I1002 11:10:42.373374  355913 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1002 11:10:42.373379  355913 command_runner.go:130] > GoVersion:        go1.19.9
	I1002 11:10:42.373383  355913 command_runner.go:130] > Compiler:         gc
	I1002 11:10:42.373387  355913 command_runner.go:130] > Platform:         linux/amd64
	I1002 11:10:42.373397  355913 command_runner.go:130] > Linkmode:         dynamic
	I1002 11:10:42.373404  355913 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 11:10:42.373408  355913 command_runner.go:130] > SeccompEnabled:   true
	I1002 11:10:42.373412  355913 command_runner.go:130] > AppArmorEnabled:  false
	I1002 11:10:42.376881  355913 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:10:42.378008  355913 main.go:141] libmachine: (multinode-224116) Calling .GetIP
	I1002 11:10:42.380740  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:42.381105  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:10:42.381133  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:10:42.381339  355913 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 11:10:42.385476  355913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
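	(The hosts-file update above is a drop-then-append rewrite: strip any stale `host.minikube.internal` entry, then append the current mapping. Sketched here on a scratch copy rather than the real `/etc/hosts`.)

	```shell
	# Sketch: idempotent host.minikube.internal update on a temp hosts file.
	hosts=$(mktemp)
	tab=$(printf '\t')
	printf '127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n' > "$hosts"
	{ grep -v "${tab}host.minikube.internal\$" "$hosts"
	  printf '192.168.39.1\thost.minikube.internal\n'; } > "$hosts.new"
	cat "$hosts.new"   # still exactly one host.minikube.internal entry
	```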
	I1002 11:10:42.398245  355913 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:10:42.398369  355913 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:10:42.434150  355913 command_runner.go:130] > {
	I1002 11:10:42.434199  355913 command_runner.go:130] >   "images": [
	I1002 11:10:42.434206  355913 command_runner.go:130] >     {
	I1002 11:10:42.434218  355913 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1002 11:10:42.434227  355913 command_runner.go:130] >       "repoTags": [
	I1002 11:10:42.434239  355913 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1002 11:10:42.434245  355913 command_runner.go:130] >       ],
	I1002 11:10:42.434252  355913 command_runner.go:130] >       "repoDigests": [
	I1002 11:10:42.434268  355913 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1002 11:10:42.434283  355913 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1002 11:10:42.434292  355913 command_runner.go:130] >       ],
	I1002 11:10:42.434299  355913 command_runner.go:130] >       "size": "750414",
	I1002 11:10:42.434309  355913 command_runner.go:130] >       "uid": {
	I1002 11:10:42.434315  355913 command_runner.go:130] >         "value": "65535"
	I1002 11:10:42.434325  355913 command_runner.go:130] >       },
	I1002 11:10:42.434331  355913 command_runner.go:130] >       "username": "",
	I1002 11:10:42.434345  355913 command_runner.go:130] >       "spec": null,
	I1002 11:10:42.434367  355913 command_runner.go:130] >       "pinned": false
	I1002 11:10:42.434374  355913 command_runner.go:130] >     }
	I1002 11:10:42.434382  355913 command_runner.go:130] >   ]
	I1002 11:10:42.434388  355913 command_runner.go:130] > }
	I1002 11:10:42.435802  355913 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 11:10:42.435877  355913 ssh_runner.go:195] Run: which lz4
	I1002 11:10:42.439654  355913 command_runner.go:130] > /usr/bin/lz4
	I1002 11:10:42.439924  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1002 11:10:42.440031  355913 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1002 11:10:42.444283  355913 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:10:42.444316  355913 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:10:42.444331  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1002 11:10:44.266396  355913 crio.go:444] Took 1.826403 seconds to copy over tarball
	I1002 11:10:44.266479  355913 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 11:10:47.117971  355913 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.85145441s)
	I1002 11:10:47.118001  355913 crio.go:451] Took 2.851579 seconds to extract the tarball
	I1002 11:10:47.118014  355913 ssh_runner.go:146] rm: /preloaded.tar.lz4
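	(The preload flow above hinges on an existence check: `stat` the tarball, and a non-zero exit means "not present, scp it over". The `%!s(MISSING) %!y(MISSING)` in the logged command is a Go-fmt logging artifact for stat's `%s %y` format. A minimal sketch of that check, against a path guaranteed not to exist:)

	```shell
	# Sketch: stat-based existence check, as used for /preloaded.tar.lz4.
	target=$(mktemp -u)   # a path that does not exist yet
	if stat -c "%s %y" "$target" >/dev/null 2>&1; then
	  echo "preload already present"
	else
	  echo "preload missing, would scp it"
	fi
	# prints: preload missing, would scp it
	```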
	I1002 11:10:47.158470  355913 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:10:47.205747  355913 command_runner.go:130] > {
	I1002 11:10:47.205768  355913 command_runner.go:130] >   "images": [
	I1002 11:10:47.205774  355913 command_runner.go:130] >     {
	I1002 11:10:47.205786  355913 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1002 11:10:47.205792  355913 command_runner.go:130] >       "repoTags": [
	I1002 11:10:47.205800  355913 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1002 11:10:47.205805  355913 command_runner.go:130] >       ],
	I1002 11:10:47.205812  355913 command_runner.go:130] >       "repoDigests": [
	I1002 11:10:47.205824  355913 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1002 11:10:47.205843  355913 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1002 11:10:47.205854  355913 command_runner.go:130] >       ],
	I1002 11:10:47.205862  355913 command_runner.go:130] >       "size": "65258016",
	I1002 11:10:47.205870  355913 command_runner.go:130] >       "uid": null,
	I1002 11:10:47.205881  355913 command_runner.go:130] >       "username": "",
	I1002 11:10:47.205894  355913 command_runner.go:130] >       "spec": null,
	I1002 11:10:47.205905  355913 command_runner.go:130] >       "pinned": false
	I1002 11:10:47.205912  355913 command_runner.go:130] >     },
	I1002 11:10:47.205921  355913 command_runner.go:130] >     {
	I1002 11:10:47.205933  355913 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 11:10:47.205944  355913 command_runner.go:130] >       "repoTags": [
	I1002 11:10:47.205955  355913 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 11:10:47.205965  355913 command_runner.go:130] >       ],
	I1002 11:10:47.205975  355913 command_runner.go:130] >       "repoDigests": [
	I1002 11:10:47.205992  355913 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 11:10:47.206009  355913 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 11:10:47.206019  355913 command_runner.go:130] >       ],
	I1002 11:10:47.206032  355913 command_runner.go:130] >       "size": "31470524",
	I1002 11:10:47.206046  355913 command_runner.go:130] >       "uid": null,
	I1002 11:10:47.206054  355913 command_runner.go:130] >       "username": "",
	I1002 11:10:47.206065  355913 command_runner.go:130] >       "spec": null,
	I1002 11:10:47.206076  355913 command_runner.go:130] >       "pinned": false
	I1002 11:10:47.206084  355913 command_runner.go:130] >     },
	I1002 11:10:47.206091  355913 command_runner.go:130] >     {
	I1002 11:10:47.206103  355913 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1002 11:10:47.206114  355913 command_runner.go:130] >       "repoTags": [
	I1002 11:10:47.206126  355913 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1002 11:10:47.206133  355913 command_runner.go:130] >       ],
	I1002 11:10:47.206141  355913 command_runner.go:130] >       "repoDigests": [
	I1002 11:10:47.206157  355913 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1002 11:10:47.206173  355913 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1002 11:10:47.206184  355913 command_runner.go:130] >       ],
	I1002 11:10:47.206195  355913 command_runner.go:130] >       "size": "53621675",
	I1002 11:10:47.206206  355913 command_runner.go:130] >       "uid": null,
	I1002 11:10:47.206214  355913 command_runner.go:130] >       "username": "",
	I1002 11:10:47.206224  355913 command_runner.go:130] >       "spec": null,
	I1002 11:10:47.206239  355913 command_runner.go:130] >       "pinned": false
	I1002 11:10:47.206249  355913 command_runner.go:130] >     },
	I1002 11:10:47.206259  355913 command_runner.go:130] >     {
	I1002 11:10:47.206270  355913 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1002 11:10:47.206281  355913 command_runner.go:130] >       "repoTags": [
	I1002 11:10:47.206293  355913 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1002 11:10:47.206303  355913 command_runner.go:130] >       ],
	I1002 11:10:47.206311  355913 command_runner.go:130] >       "repoDigests": [
	I1002 11:10:47.206327  355913 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1002 11:10:47.206342  355913 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1002 11:10:47.206370  355913 command_runner.go:130] >       ],
	I1002 11:10:47.206382  355913 command_runner.go:130] >       "size": "295456551",
	I1002 11:10:47.206391  355913 command_runner.go:130] >       "uid": {
	I1002 11:10:47.206401  355913 command_runner.go:130] >         "value": "0"
	I1002 11:10:47.206411  355913 command_runner.go:130] >       },
	I1002 11:10:47.206419  355913 command_runner.go:130] >       "username": "",
	I1002 11:10:47.206430  355913 command_runner.go:130] >       "spec": null,
	I1002 11:10:47.206441  355913 command_runner.go:130] >       "pinned": false
	I1002 11:10:47.206453  355913 command_runner.go:130] >     },
	I1002 11:10:47.206462  355913 command_runner.go:130] >     {
	I1002 11:10:47.206473  355913 command_runner.go:130] >       "id": "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce",
	I1002 11:10:47.206484  355913 command_runner.go:130] >       "repoTags": [
	I1002 11:10:47.206497  355913 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I1002 11:10:47.206507  355913 command_runner.go:130] >       ],
	I1002 11:10:47.206517  355913 command_runner.go:130] >       "repoDigests": [
	I1002 11:10:47.206533  355913 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631",
	I1002 11:10:47.206549  355913 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I1002 11:10:47.206558  355913 command_runner.go:130] >       ],
	I1002 11:10:47.206565  355913 command_runner.go:130] >       "size": "127149008",
	I1002 11:10:47.206576  355913 command_runner.go:130] >       "uid": {
	I1002 11:10:47.206586  355913 command_runner.go:130] >         "value": "0"
	I1002 11:10:47.206595  355913 command_runner.go:130] >       },
	I1002 11:10:47.206603  355913 command_runner.go:130] >       "username": "",
	I1002 11:10:47.206614  355913 command_runner.go:130] >       "spec": null,
	I1002 11:10:47.206630  355913 command_runner.go:130] >       "pinned": false
	I1002 11:10:47.206640  355913 command_runner.go:130] >     },
	I1002 11:10:47.206651  355913 command_runner.go:130] >     {
	I1002 11:10:47.206665  355913 command_runner.go:130] >       "id": "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57",
	I1002 11:10:47.206676  355913 command_runner.go:130] >       "repoTags": [
	I1002 11:10:47.206690  355913 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I1002 11:10:47.206700  355913 command_runner.go:130] >       ],
	I1002 11:10:47.206709  355913 command_runner.go:130] >       "repoDigests": [
	I1002 11:10:47.206726  355913 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4",
	I1002 11:10:47.206742  355913 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e"
	I1002 11:10:47.206753  355913 command_runner.go:130] >       ],
	I1002 11:10:47.206762  355913 command_runner.go:130] >       "size": "123171638",
	I1002 11:10:47.206772  355913 command_runner.go:130] >       "uid": {
	I1002 11:10:47.206788  355913 command_runner.go:130] >         "value": "0"
	I1002 11:10:47.206798  355913 command_runner.go:130] >       },
	I1002 11:10:47.206806  355913 command_runner.go:130] >       "username": "",
	I1002 11:10:47.206814  355913 command_runner.go:130] >       "spec": null,
	I1002 11:10:47.206824  355913 command_runner.go:130] >       "pinned": false
	I1002 11:10:47.206834  355913 command_runner.go:130] >     },
	I1002 11:10:47.206841  355913 command_runner.go:130] >     {
	I1002 11:10:47.206860  355913 command_runner.go:130] >       "id": "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0",
	I1002 11:10:47.206871  355913 command_runner.go:130] >       "repoTags": [
	I1002 11:10:47.206883  355913 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I1002 11:10:47.206892  355913 command_runner.go:130] >       ],
	I1002 11:10:47.206900  355913 command_runner.go:130] >       "repoDigests": [
	I1002 11:10:47.206916  355913 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded",
	I1002 11:10:47.206932  355913 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"
	I1002 11:10:47.206942  355913 command_runner.go:130] >       ],
	I1002 11:10:47.206950  355913 command_runner.go:130] >       "size": "74687895",
	I1002 11:10:47.206961  355913 command_runner.go:130] >       "uid": null,
	I1002 11:10:47.206971  355913 command_runner.go:130] >       "username": "",
	I1002 11:10:47.206979  355913 command_runner.go:130] >       "spec": null,
	I1002 11:10:47.206989  355913 command_runner.go:130] >       "pinned": false
	I1002 11:10:47.206997  355913 command_runner.go:130] >     },
	I1002 11:10:47.207007  355913 command_runner.go:130] >     {
	I1002 11:10:47.207019  355913 command_runner.go:130] >       "id": "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8",
	I1002 11:10:47.207030  355913 command_runner.go:130] >       "repoTags": [
	I1002 11:10:47.207042  355913 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I1002 11:10:47.207057  355913 command_runner.go:130] >       ],
	I1002 11:10:47.207068  355913 command_runner.go:130] >       "repoDigests": [
	I1002 11:10:47.207101  355913 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I1002 11:10:47.207117  355913 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543"
	I1002 11:10:47.207123  355913 command_runner.go:130] >       ],
	I1002 11:10:47.207131  355913 command_runner.go:130] >       "size": "61485878",
	I1002 11:10:47.207138  355913 command_runner.go:130] >       "uid": {
	I1002 11:10:47.207147  355913 command_runner.go:130] >         "value": "0"
	I1002 11:10:47.207156  355913 command_runner.go:130] >       },
	I1002 11:10:47.207166  355913 command_runner.go:130] >       "username": "",
	I1002 11:10:47.207177  355913 command_runner.go:130] >       "spec": null,
	I1002 11:10:47.207188  355913 command_runner.go:130] >       "pinned": false
	I1002 11:10:47.207197  355913 command_runner.go:130] >     },
	I1002 11:10:47.207204  355913 command_runner.go:130] >     {
	I1002 11:10:47.207218  355913 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1002 11:10:47.207226  355913 command_runner.go:130] >       "repoTags": [
	I1002 11:10:47.207237  355913 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1002 11:10:47.207247  355913 command_runner.go:130] >       ],
	I1002 11:10:47.207260  355913 command_runner.go:130] >       "repoDigests": [
	I1002 11:10:47.207276  355913 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1002 11:10:47.207291  355913 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1002 11:10:47.207300  355913 command_runner.go:130] >       ],
	I1002 11:10:47.207308  355913 command_runner.go:130] >       "size": "750414",
	I1002 11:10:47.207319  355913 command_runner.go:130] >       "uid": {
	I1002 11:10:47.207330  355913 command_runner.go:130] >         "value": "65535"
	I1002 11:10:47.207339  355913 command_runner.go:130] >       },
	I1002 11:10:47.207347  355913 command_runner.go:130] >       "username": "",
	I1002 11:10:47.207358  355913 command_runner.go:130] >       "spec": null,
	I1002 11:10:47.207368  355913 command_runner.go:130] >       "pinned": false
	I1002 11:10:47.207377  355913 command_runner.go:130] >     }
	I1002 11:10:47.207384  355913 command_runner.go:130] >   ]
	I1002 11:10:47.207393  355913 command_runner.go:130] > }
	I1002 11:10:47.207516  355913 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 11:10:47.207530  355913 cache_images.go:84] Images are preloaded, skipping loading
	I1002 11:10:47.207637  355913 ssh_runner.go:195] Run: crio config
	I1002 11:10:47.255737  355913 command_runner.go:130] ! time="2023-10-02 11:10:47.203536984Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1002 11:10:47.255768  355913 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 11:10:47.260970  355913 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 11:10:47.260994  355913 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 11:10:47.261003  355913 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 11:10:47.261008  355913 command_runner.go:130] > #
	I1002 11:10:47.261017  355913 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 11:10:47.261027  355913 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 11:10:47.261045  355913 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 11:10:47.261068  355913 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 11:10:47.261079  355913 command_runner.go:130] > # reload'.
	I1002 11:10:47.261094  355913 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 11:10:47.261113  355913 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 11:10:47.261134  355913 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 11:10:47.261147  355913 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 11:10:47.261154  355913 command_runner.go:130] > [crio]
	I1002 11:10:47.261166  355913 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 11:10:47.261178  355913 command_runner.go:130] > # containers images, in this directory.
	I1002 11:10:47.261190  355913 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1002 11:10:47.261209  355913 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 11:10:47.261221  355913 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1002 11:10:47.261236  355913 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 11:10:47.261250  355913 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 11:10:47.261260  355913 command_runner.go:130] > storage_driver = "overlay"
	I1002 11:10:47.261271  355913 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 11:10:47.261284  355913 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 11:10:47.261300  355913 command_runner.go:130] > storage_option = [
	I1002 11:10:47.261313  355913 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1002 11:10:47.261322  355913 command_runner.go:130] > ]
	I1002 11:10:47.261334  355913 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 11:10:47.261348  355913 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 11:10:47.261359  355913 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 11:10:47.261373  355913 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 11:10:47.261387  355913 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 11:10:47.261398  355913 command_runner.go:130] > # always happen on a node reboot
	I1002 11:10:47.261407  355913 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 11:10:47.261420  355913 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 11:10:47.261439  355913 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 11:10:47.261460  355913 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 11:10:47.261473  355913 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1002 11:10:47.261489  355913 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 11:10:47.261506  355913 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 11:10:47.261517  355913 command_runner.go:130] > # internal_wipe = true
	I1002 11:10:47.261530  355913 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 11:10:47.261547  355913 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 11:10:47.261560  355913 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 11:10:47.261574  355913 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 11:10:47.261588  355913 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 11:10:47.261597  355913 command_runner.go:130] > [crio.api]
	I1002 11:10:47.261607  355913 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 11:10:47.261618  355913 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 11:10:47.261630  355913 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 11:10:47.261639  355913 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 11:10:47.261653  355913 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 11:10:47.261666  355913 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 11:10:47.261677  355913 command_runner.go:130] > # stream_port = "0"
	I1002 11:10:47.261689  355913 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 11:10:47.261701  355913 command_runner.go:130] > # stream_enable_tls = false
	I1002 11:10:47.261714  355913 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 11:10:47.261726  355913 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 11:10:47.261738  355913 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 11:10:47.261752  355913 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1002 11:10:47.261766  355913 command_runner.go:130] > # minutes.
	I1002 11:10:47.261777  355913 command_runner.go:130] > # stream_tls_cert = ""
	I1002 11:10:47.261796  355913 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 11:10:47.261810  355913 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1002 11:10:47.261821  355913 command_runner.go:130] > # stream_tls_key = ""
	I1002 11:10:47.261832  355913 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 11:10:47.261846  355913 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 11:10:47.261857  355913 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1002 11:10:47.261868  355913 command_runner.go:130] > # stream_tls_ca = ""
	I1002 11:10:47.261881  355913 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 11:10:47.261893  355913 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1002 11:10:47.261906  355913 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 11:10:47.261917  355913 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1002 11:10:47.261954  355913 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 11:10:47.261967  355913 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 11:10:47.261974  355913 command_runner.go:130] > [crio.runtime]
	I1002 11:10:47.261986  355913 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 11:10:47.261999  355913 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 11:10:47.262013  355913 command_runner.go:130] > # "nofile=1024:2048"
	I1002 11:10:47.262027  355913 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 11:10:47.262037  355913 command_runner.go:130] > # default_ulimits = [
	I1002 11:10:47.262047  355913 command_runner.go:130] > # ]
	I1002 11:10:47.262059  355913 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 11:10:47.262069  355913 command_runner.go:130] > # no_pivot = false
	I1002 11:10:47.262083  355913 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 11:10:47.262097  355913 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 11:10:47.262109  355913 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 11:10:47.262120  355913 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 11:10:47.262137  355913 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 11:10:47.262152  355913 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 11:10:47.262162  355913 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1002 11:10:47.262169  355913 command_runner.go:130] > # Cgroup setting for conmon
	I1002 11:10:47.262181  355913 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 11:10:47.262191  355913 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 11:10:47.262201  355913 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 11:10:47.262210  355913 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 11:10:47.262229  355913 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 11:10:47.262240  355913 command_runner.go:130] > conmon_env = [
	I1002 11:10:47.262251  355913 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1002 11:10:47.262260  355913 command_runner.go:130] > ]
	I1002 11:10:47.262270  355913 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 11:10:47.262283  355913 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 11:10:47.262296  355913 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 11:10:47.262303  355913 command_runner.go:130] > # default_env = [
	I1002 11:10:47.262313  355913 command_runner.go:130] > # ]
	I1002 11:10:47.262324  355913 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 11:10:47.262338  355913 command_runner.go:130] > # selinux = false
	I1002 11:10:47.262366  355913 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 11:10:47.262381  355913 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1002 11:10:47.262392  355913 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1002 11:10:47.262402  355913 command_runner.go:130] > # seccomp_profile = ""
	I1002 11:10:47.262415  355913 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1002 11:10:47.262429  355913 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1002 11:10:47.262443  355913 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1002 11:10:47.262458  355913 command_runner.go:130] > # which might increase security.
	I1002 11:10:47.262469  355913 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1002 11:10:47.262481  355913 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 11:10:47.262495  355913 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 11:10:47.262509  355913 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 11:10:47.262523  355913 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 11:10:47.262536  355913 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:10:47.262547  355913 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 11:10:47.262561  355913 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 11:10:47.262571  355913 command_runner.go:130] > # the cgroup blockio controller.
	I1002 11:10:47.262582  355913 command_runner.go:130] > # blockio_config_file = ""
	I1002 11:10:47.262594  355913 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 11:10:47.262604  355913 command_runner.go:130] > # irqbalance daemon.
	I1002 11:10:47.262617  355913 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 11:10:47.262631  355913 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 11:10:47.262641  355913 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:10:47.262651  355913 command_runner.go:130] > # rdt_config_file = ""
	I1002 11:10:47.262661  355913 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 11:10:47.262676  355913 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1002 11:10:47.262690  355913 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 11:10:47.262701  355913 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 11:10:47.262713  355913 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 11:10:47.262727  355913 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 11:10:47.262737  355913 command_runner.go:130] > # will be added.
	I1002 11:10:47.262751  355913 command_runner.go:130] > # default_capabilities = [
	I1002 11:10:47.262760  355913 command_runner.go:130] > # 	"CHOWN",
	I1002 11:10:47.262768  355913 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 11:10:47.262778  355913 command_runner.go:130] > # 	"FSETID",
	I1002 11:10:47.262787  355913 command_runner.go:130] > # 	"FOWNER",
	I1002 11:10:47.262797  355913 command_runner.go:130] > # 	"SETGID",
	I1002 11:10:47.262807  355913 command_runner.go:130] > # 	"SETUID",
	I1002 11:10:47.262815  355913 command_runner.go:130] > # 	"SETPCAP",
	I1002 11:10:47.262826  355913 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 11:10:47.262835  355913 command_runner.go:130] > # 	"KILL",
	I1002 11:10:47.262842  355913 command_runner.go:130] > # ]
	I1002 11:10:47.262856  355913 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 11:10:47.262873  355913 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 11:10:47.262884  355913 command_runner.go:130] > # default_sysctls = [
	I1002 11:10:47.262893  355913 command_runner.go:130] > # ]
	I1002 11:10:47.262902  355913 command_runner.go:130] > # List of devices on the host that a
	I1002 11:10:47.262916  355913 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 11:10:47.262926  355913 command_runner.go:130] > # allowed_devices = [
	I1002 11:10:47.262934  355913 command_runner.go:130] > # 	"/dev/fuse",
	I1002 11:10:47.262941  355913 command_runner.go:130] > # ]
	I1002 11:10:47.262950  355913 command_runner.go:130] > # List of additional devices. specified as
	I1002 11:10:47.262966  355913 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 11:10:47.262979  355913 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 11:10:47.263029  355913 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 11:10:47.263043  355913 command_runner.go:130] > # additional_devices = [
	I1002 11:10:47.263049  355913 command_runner.go:130] > # ]
	I1002 11:10:47.263058  355913 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 11:10:47.263069  355913 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 11:10:47.263077  355913 command_runner.go:130] > # 	"/etc/cdi",
	I1002 11:10:47.263088  355913 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 11:10:47.263101  355913 command_runner.go:130] > # ]
	I1002 11:10:47.263115  355913 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 11:10:47.263135  355913 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 11:10:47.263144  355913 command_runner.go:130] > # Defaults to false.
	I1002 11:10:47.263154  355913 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 11:10:47.263168  355913 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 11:10:47.263181  355913 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 11:10:47.263192  355913 command_runner.go:130] > # hooks_dir = [
	I1002 11:10:47.263203  355913 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 11:10:47.263211  355913 command_runner.go:130] > # ]
	I1002 11:10:47.263223  355913 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 11:10:47.263237  355913 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 11:10:47.263248  355913 command_runner.go:130] > # its default mounts from the following two files:
	I1002 11:10:47.263257  355913 command_runner.go:130] > #
	I1002 11:10:47.263269  355913 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 11:10:47.263283  355913 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 11:10:47.263296  355913 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 11:10:47.263305  355913 command_runner.go:130] > #
	I1002 11:10:47.263320  355913 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 11:10:47.263334  355913 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 11:10:47.263347  355913 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 11:10:47.263359  355913 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 11:10:47.263368  355913 command_runner.go:130] > #
	I1002 11:10:47.263376  355913 command_runner.go:130] > # default_mounts_file = ""
	I1002 11:10:47.263389  355913 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 11:10:47.263404  355913 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 11:10:47.263414  355913 command_runner.go:130] > pids_limit = 1024
	I1002 11:10:47.263427  355913 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1002 11:10:47.263441  355913 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 11:10:47.263455  355913 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 11:10:47.263472  355913 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 11:10:47.263483  355913 command_runner.go:130] > # log_size_max = -1
	I1002 11:10:47.263498  355913 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1002 11:10:47.263508  355913 command_runner.go:130] > # log_to_journald = false
	I1002 11:10:47.263519  355913 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 11:10:47.263531  355913 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 11:10:47.263562  355913 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 11:10:47.263583  355913 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 11:10:47.263597  355913 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 11:10:47.263606  355913 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 11:10:47.263617  355913 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 11:10:47.263626  355913 command_runner.go:130] > # read_only = false
	I1002 11:10:47.263638  355913 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 11:10:47.263651  355913 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 11:10:47.263662  355913 command_runner.go:130] > # live configuration reload.
	I1002 11:10:47.263671  355913 command_runner.go:130] > # log_level = "info"
	I1002 11:10:47.263685  355913 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 11:10:47.263697  355913 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:10:47.263705  355913 command_runner.go:130] > # log_filter = ""
	I1002 11:10:47.263719  355913 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 11:10:47.263732  355913 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 11:10:47.263743  355913 command_runner.go:130] > # separated by comma.
	I1002 11:10:47.263753  355913 command_runner.go:130] > # uid_mappings = ""
	I1002 11:10:47.263767  355913 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 11:10:47.263784  355913 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 11:10:47.263795  355913 command_runner.go:130] > # separated by comma.
	I1002 11:10:47.263806  355913 command_runner.go:130] > # gid_mappings = ""
	I1002 11:10:47.263818  355913 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 11:10:47.263832  355913 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 11:10:47.263846  355913 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 11:10:47.263857  355913 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 11:10:47.263868  355913 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 11:10:47.263882  355913 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 11:10:47.263896  355913 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 11:10:47.263907  355913 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 11:10:47.263919  355913 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 11:10:47.263933  355913 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 11:10:47.263946  355913 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 11:10:47.263956  355913 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 11:10:47.263967  355913 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 11:10:47.263980  355913 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 11:10:47.263992  355913 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 11:10:47.264010  355913 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 11:10:47.264026  355913 command_runner.go:130] > drop_infra_ctr = false
	I1002 11:10:47.264041  355913 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 11:10:47.264053  355913 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 11:10:47.264067  355913 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 11:10:47.264078  355913 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 11:10:47.264089  355913 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 11:10:47.264101  355913 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 11:10:47.264112  355913 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 11:10:47.264129  355913 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 11:10:47.264140  355913 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1002 11:10:47.264155  355913 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 11:10:47.264170  355913 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1002 11:10:47.264184  355913 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1002 11:10:47.264195  355913 command_runner.go:130] > # default_runtime = "runc"
	I1002 11:10:47.264205  355913 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 11:10:47.264221  355913 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1002 11:10:47.264239  355913 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1002 11:10:47.264254  355913 command_runner.go:130] > # creation as a file is not desired either.
	I1002 11:10:47.264272  355913 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 11:10:47.264283  355913 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 11:10:47.264295  355913 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 11:10:47.264304  355913 command_runner.go:130] > # ]
	I1002 11:10:47.264316  355913 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 11:10:47.264330  355913 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 11:10:47.264344  355913 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1002 11:10:47.264362  355913 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1002 11:10:47.264371  355913 command_runner.go:130] > #
	I1002 11:10:47.264380  355913 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1002 11:10:47.264392  355913 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1002 11:10:47.264403  355913 command_runner.go:130] > #  runtime_type = "oci"
	I1002 11:10:47.264415  355913 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1002 11:10:47.264424  355913 command_runner.go:130] > #  privileged_without_host_devices = false
	I1002 11:10:47.264434  355913 command_runner.go:130] > #  allowed_annotations = []
	I1002 11:10:47.264441  355913 command_runner.go:130] > # Where:
	I1002 11:10:47.264454  355913 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1002 11:10:47.264472  355913 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1002 11:10:47.264487  355913 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 11:10:47.264501  355913 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 11:10:47.264511  355913 command_runner.go:130] > #   in $PATH.
	I1002 11:10:47.264525  355913 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1002 11:10:47.264536  355913 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 11:10:47.264550  355913 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1002 11:10:47.264559  355913 command_runner.go:130] > #   state.
	I1002 11:10:47.264572  355913 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 11:10:47.264586  355913 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 11:10:47.264600  355913 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 11:10:47.264618  355913 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 11:10:47.264632  355913 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 11:10:47.264651  355913 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 11:10:47.264663  355913 command_runner.go:130] > #   The currently recognized values are:
	I1002 11:10:47.264678  355913 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 11:10:47.264693  355913 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 11:10:47.264707  355913 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 11:10:47.264721  355913 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 11:10:47.264737  355913 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 11:10:47.264752  355913 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 11:10:47.264765  355913 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 11:10:47.264780  355913 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1002 11:10:47.264792  355913 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 11:10:47.264804  355913 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 11:10:47.264813  355913 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1002 11:10:47.264823  355913 command_runner.go:130] > runtime_type = "oci"
	I1002 11:10:47.264834  355913 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 11:10:47.264843  355913 command_runner.go:130] > runtime_config_path = ""
	I1002 11:10:47.264854  355913 command_runner.go:130] > monitor_path = ""
	I1002 11:10:47.264863  355913 command_runner.go:130] > monitor_cgroup = ""
	I1002 11:10:47.264872  355913 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 11:10:47.264885  355913 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1002 11:10:47.264896  355913 command_runner.go:130] > # running containers
	I1002 11:10:47.264907  355913 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1002 11:10:47.264921  355913 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1002 11:10:47.265004  355913 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1002 11:10:47.265019  355913 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1002 11:10:47.265029  355913 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1002 11:10:47.265037  355913 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1002 11:10:47.265049  355913 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1002 11:10:47.265061  355913 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1002 11:10:47.265073  355913 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1002 11:10:47.265084  355913 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1002 11:10:47.265099  355913 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 11:10:47.265111  355913 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 11:10:47.265130  355913 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 11:10:47.265146  355913 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1002 11:10:47.265166  355913 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1002 11:10:47.265180  355913 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 11:10:47.265198  355913 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 11:10:47.265215  355913 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 11:10:47.265228  355913 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 11:10:47.265243  355913 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 11:10:47.265256  355913 command_runner.go:130] > # Example:
	I1002 11:10:47.265268  355913 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 11:10:47.265280  355913 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 11:10:47.265293  355913 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 11:10:47.265305  355913 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 11:10:47.265315  355913 command_runner.go:130] > # cpuset = 0
	I1002 11:10:47.265326  355913 command_runner.go:130] > # cpushares = "0-1"
	I1002 11:10:47.265334  355913 command_runner.go:130] > # Where:
	I1002 11:10:47.265344  355913 command_runner.go:130] > # The workload name is workload-type.
	I1002 11:10:47.265359  355913 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 11:10:47.265372  355913 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 11:10:47.265385  355913 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 11:10:47.265402  355913 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 11:10:47.265416  355913 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1002 11:10:47.265425  355913 command_runner.go:130] > # 
	I1002 11:10:47.265437  355913 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 11:10:47.265445  355913 command_runner.go:130] > #
	I1002 11:10:47.265456  355913 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 11:10:47.265474  355913 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1002 11:10:47.265488  355913 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1002 11:10:47.265503  355913 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1002 11:10:47.265516  355913 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1002 11:10:47.265526  355913 command_runner.go:130] > [crio.image]
	I1002 11:10:47.265538  355913 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 11:10:47.265547  355913 command_runner.go:130] > # default_transport = "docker://"
	I1002 11:10:47.265568  355913 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 11:10:47.265583  355913 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 11:10:47.265593  355913 command_runner.go:130] > # global_auth_file = ""
	I1002 11:10:47.265606  355913 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 11:10:47.265621  355913 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:10:47.265632  355913 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1002 11:10:47.265644  355913 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 11:10:47.265658  355913 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 11:10:47.265670  355913 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:10:47.265681  355913 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 11:10:47.265692  355913 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 11:10:47.265708  355913 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1002 11:10:47.265720  355913 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1002 11:10:47.265734  355913 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 11:10:47.265744  355913 command_runner.go:130] > # pause_command = "/pause"
	I1002 11:10:47.265756  355913 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 11:10:47.265769  355913 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 11:10:47.265777  355913 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 11:10:47.265785  355913 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 11:10:47.265793  355913 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 11:10:47.265800  355913 command_runner.go:130] > # signature_policy = ""
	I1002 11:10:47.265809  355913 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 11:10:47.265819  355913 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 11:10:47.265827  355913 command_runner.go:130] > # changing them here.
	I1002 11:10:47.265835  355913 command_runner.go:130] > # insecure_registries = [
	I1002 11:10:47.265841  355913 command_runner.go:130] > # ]
	I1002 11:10:47.265857  355913 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 11:10:47.265867  355913 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 11:10:47.265874  355913 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 11:10:47.265888  355913 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 11:10:47.265896  355913 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 11:10:47.265905  355913 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 11:10:47.265912  355913 command_runner.go:130] > # CNI plugins.
	I1002 11:10:47.265918  355913 command_runner.go:130] > [crio.network]
	I1002 11:10:47.265928  355913 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 11:10:47.265940  355913 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1002 11:10:47.265947  355913 command_runner.go:130] > # cni_default_network = ""
	I1002 11:10:47.265957  355913 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 11:10:47.265964  355913 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 11:10:47.265977  355913 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 11:10:47.265983  355913 command_runner.go:130] > # plugin_dirs = [
	I1002 11:10:47.265989  355913 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 11:10:47.265995  355913 command_runner.go:130] > # ]
	I1002 11:10:47.266012  355913 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1002 11:10:47.266023  355913 command_runner.go:130] > [crio.metrics]
	I1002 11:10:47.266034  355913 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 11:10:47.266045  355913 command_runner.go:130] > enable_metrics = true
	I1002 11:10:47.266061  355913 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 11:10:47.266073  355913 command_runner.go:130] > # Per default all metrics are enabled.
	I1002 11:10:47.266087  355913 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1002 11:10:47.266102  355913 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 11:10:47.266115  355913 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 11:10:47.266127  355913 command_runner.go:130] > # metrics_collectors = [
	I1002 11:10:47.266138  355913 command_runner.go:130] > # 	"operations",
	I1002 11:10:47.266148  355913 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1002 11:10:47.266159  355913 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1002 11:10:47.266170  355913 command_runner.go:130] > # 	"operations_errors",
	I1002 11:10:47.266180  355913 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1002 11:10:47.266191  355913 command_runner.go:130] > # 	"image_pulls_by_name",
	I1002 11:10:47.266200  355913 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1002 11:10:47.266208  355913 command_runner.go:130] > # 	"image_pulls_failures",
	I1002 11:10:47.266222  355913 command_runner.go:130] > # 	"image_pulls_successes",
	I1002 11:10:47.266234  355913 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 11:10:47.266244  355913 command_runner.go:130] > # 	"image_layer_reuse",
	I1002 11:10:47.266255  355913 command_runner.go:130] > # 	"containers_oom_total",
	I1002 11:10:47.266270  355913 command_runner.go:130] > # 	"containers_oom",
	I1002 11:10:47.266281  355913 command_runner.go:130] > # 	"processes_defunct",
	I1002 11:10:47.266292  355913 command_runner.go:130] > # 	"operations_total",
	I1002 11:10:47.266302  355913 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 11:10:47.266313  355913 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 11:10:47.266324  355913 command_runner.go:130] > # 	"operations_errors_total",
	I1002 11:10:47.266333  355913 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 11:10:47.266345  355913 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 11:10:47.266365  355913 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 11:10:47.266374  355913 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 11:10:47.266385  355913 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 11:10:47.266397  355913 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 11:10:47.266404  355913 command_runner.go:130] > # ]
	I1002 11:10:47.266416  355913 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 11:10:47.266427  355913 command_runner.go:130] > # metrics_port = 9090
	I1002 11:10:47.266437  355913 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 11:10:47.266448  355913 command_runner.go:130] > # metrics_socket = ""
	I1002 11:10:47.266461  355913 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 11:10:47.266477  355913 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 11:10:47.266492  355913 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 11:10:47.266503  355913 command_runner.go:130] > # certificate on any modification event.
	I1002 11:10:47.266514  355913 command_runner.go:130] > # metrics_cert = ""
	I1002 11:10:47.266524  355913 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 11:10:47.266539  355913 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 11:10:47.266548  355913 command_runner.go:130] > # metrics_key = ""
	I1002 11:10:47.266561  355913 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 11:10:47.266571  355913 command_runner.go:130] > [crio.tracing]
	I1002 11:10:47.266584  355913 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 11:10:47.266595  355913 command_runner.go:130] > # enable_tracing = false
	I1002 11:10:47.266606  355913 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1002 11:10:47.266617  355913 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1002 11:10:47.266627  355913 command_runner.go:130] > # Number of samples to collect per million spans.
	I1002 11:10:47.266636  355913 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1002 11:10:47.266649  355913 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 11:10:47.266659  355913 command_runner.go:130] > [crio.stats]
	I1002 11:10:47.266673  355913 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 11:10:47.266690  355913 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 11:10:47.266701  355913 command_runner.go:130] > # stats_collection_period = 0
	I1002 11:10:47.266810  355913 cni.go:84] Creating CNI manager for ""
	I1002 11:10:47.266821  355913 cni.go:136] 3 nodes found, recommending kindnet
	I1002 11:10:47.266846  355913 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:10:47.266904  355913 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-224116 NodeName:multinode-224116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:10:47.267087  355913 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-224116"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:10:47.267210  355913 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-224116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-224116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:10:47.267292  355913 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:10:47.276108  355913 command_runner.go:130] > kubeadm
	I1002 11:10:47.276125  355913 command_runner.go:130] > kubectl
	I1002 11:10:47.276132  355913 command_runner.go:130] > kubelet
	I1002 11:10:47.276400  355913 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:10:47.276458  355913 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:10:47.284580  355913 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1002 11:10:47.300322  355913 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:10:47.315719  355913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1002 11:10:47.331993  355913 ssh_runner.go:195] Run: grep 192.168.39.165	control-plane.minikube.internal$ /etc/hosts
	I1002 11:10:47.335855  355913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.165	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:10:47.348144  355913 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116 for IP: 192.168.39.165
	I1002 11:10:47.348182  355913 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:10:47.348372  355913 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:10:47.348424  355913 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:10:47.348535  355913 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.key
	I1002 11:10:47.348591  355913 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.key.6c00c800
	I1002 11:10:47.348633  355913 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/proxy-client.key
	I1002 11:10:47.348647  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 11:10:47.348666  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 11:10:47.348678  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 11:10:47.348692  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 11:10:47.348708  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 11:10:47.348721  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 11:10:47.348733  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 11:10:47.348742  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 11:10:47.348796  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:10:47.348821  355913 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:10:47.348831  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:10:47.348855  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:10:47.348883  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:10:47.348904  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:10:47.348945  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:10:47.348987  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> /usr/share/ca-certificates/3398652.pem
	I1002 11:10:47.349008  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:10:47.349026  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem -> /usr/share/ca-certificates/339865.pem
	I1002 11:10:47.349673  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:10:47.373412  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:10:47.395479  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:10:47.418474  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:10:47.447485  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:10:47.472672  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:10:47.497714  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:10:47.522299  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:10:47.547283  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:10:47.571557  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:10:47.596936  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:10:47.621895  355913 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:10:47.638914  355913 ssh_runner.go:195] Run: openssl version
	I1002 11:10:47.645088  355913 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1002 11:10:47.645208  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:10:47.655704  355913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:10:47.660804  355913 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:10:47.660877  355913 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:10:47.660945  355913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:10:47.666591  355913 command_runner.go:130] > 3ec20f2e
	I1002 11:10:47.666881  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:10:47.676738  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:10:47.686929  355913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:10:47.691584  355913 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:10:47.691964  355913 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:10:47.692026  355913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:10:47.697672  355913 command_runner.go:130] > b5213941
	I1002 11:10:47.697920  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:10:47.708166  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:10:47.718084  355913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:10:47.722790  355913 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:10:47.723044  355913 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:10:47.723091  355913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:10:47.728487  355913 command_runner.go:130] > 51391683
	I1002 11:10:47.728733  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
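	The sequence above installs each certificate into the OpenSSL trust directory via a subject-hash symlink (e.g. `3ec20f2e.0`). A minimal standalone sketch of that step, using a throwaway self-signed cert in a temp dir rather than minikube's real CA and paths:

```shell
set -e
dir=$(mktemp -d)
# Throwaway self-signed cert, purely for illustration:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout "$dir/demo.key" -out "$dir/demo.pem" 2>/dev/null
# OpenSSL resolves trust anchors as <subject-hash>.0 symlinks in the certs dir:
hash=$(openssl x509 -hash -noout -in "$dir/demo.pem")
ln -fs "$dir/demo.pem" "$dir/$hash.0"
readlink "$dir/$hash.0"
```

	The `test -L || ln -fs` guard in the log is the idempotent form of the same link.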
	I1002 11:10:47.738952  355913 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:10:47.743646  355913 command_runner.go:130] > ca.crt
	I1002 11:10:47.743666  355913 command_runner.go:130] > ca.key
	I1002 11:10:47.743679  355913 command_runner.go:130] > healthcheck-client.crt
	I1002 11:10:47.743686  355913 command_runner.go:130] > healthcheck-client.key
	I1002 11:10:47.743694  355913 command_runner.go:130] > peer.crt
	I1002 11:10:47.743701  355913 command_runner.go:130] > peer.key
	I1002 11:10:47.743711  355913 command_runner.go:130] > server.crt
	I1002 11:10:47.743717  355913 command_runner.go:130] > server.key
	I1002 11:10:47.743958  355913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:10:47.749535  355913 command_runner.go:130] > Certificate will not expire
	I1002 11:10:47.749733  355913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:10:47.755504  355913 command_runner.go:130] > Certificate will not expire
	I1002 11:10:47.755793  355913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:10:47.761879  355913 command_runner.go:130] > Certificate will not expire
	I1002 11:10:47.762133  355913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:10:47.767956  355913 command_runner.go:130] > Certificate will not expire
	I1002 11:10:47.768027  355913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:10:47.773938  355913 command_runner.go:130] > Certificate will not expire
	I1002 11:10:47.774283  355913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:10:47.779781  355913 command_runner.go:130] > Certificate will not expire
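	The repeated "Certificate will not expire" lines come from `openssl x509 -checkend`, which exits 0 and prints that message when the certificate remains valid for at least the given number of seconds (86400 s = 24 h here). A standalone sketch with a throwaway cert instead of minikube's:

```shell
set -e
dir=$(mktemp -d)
# Throwaway 2-day cert, so a 24 h -checkend window passes:
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj "/CN=demo" \
  -keyout "$dir/k.pem" -out "$dir/c.pem" 2>/dev/null
# Exit 0 + "Certificate will not expire" if still valid 86400 s from now:
openssl x509 -noout -in "$dir/c.pem" -checkend 86400
```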
	I1002 11:10:47.780016  355913 kubeadm.go:404] StartCluster: {Name:multinode-224116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-224116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.135 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.195 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:10:47.780163  355913 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:10:47.780243  355913 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:10:47.820403  355913 cri.go:89] found id: ""
	I1002 11:10:47.820485  355913 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:10:47.830763  355913 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 11:10:47.830792  355913 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 11:10:47.830801  355913 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 11:10:47.830807  355913 command_runner.go:130] > member
	I1002 11:10:47.830877  355913 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:10:47.830908  355913 kubeadm.go:636] restartCluster start
	I1002 11:10:47.830969  355913 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:10:47.840397  355913 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:47.841022  355913 kubeconfig.go:92] found "multinode-224116" server: "https://192.168.39.165:8443"
	I1002 11:10:47.841510  355913 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:10:47.841787  355913 kapi.go:59] client config for multinode-224116: &rest.Config{Host:"https://192.168.39.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:10:47.842456  355913 cert_rotation.go:137] Starting client certificate rotation controller
	I1002 11:10:47.842723  355913 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:10:47.851199  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:47.851257  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:47.863504  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:47.863528  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:47.863585  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:47.874705  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:48.375395  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:48.375492  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:48.387525  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:48.875029  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:48.875127  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:48.887758  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:49.375330  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:49.375413  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:49.387983  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:49.875633  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:49.875722  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:49.887120  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:50.375826  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:50.375917  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:50.387054  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:50.875748  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:50.875856  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:50.886873  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:51.375560  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:51.375698  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:51.386848  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:51.875504  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:51.875585  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:51.886957  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:52.374910  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:52.375029  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:52.386093  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:52.875522  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:52.875647  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:52.886827  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:53.375444  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:53.375535  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:53.387084  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:53.875700  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:53.875793  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:53.886860  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:54.375759  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:54.375871  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:54.387178  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:54.875766  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:54.875869  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:54.887680  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:55.375193  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:55.375295  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:55.386430  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:55.875032  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:55.875129  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:55.886779  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:56.375401  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:56.375492  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:56.386715  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:56.875270  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:56.875360  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:56.886547  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:10:57.375585  355913 api_server.go:166] Checking apiserver status ...
	I1002 11:10:57.375658  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:10:57.386795  355913 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
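	The polling loop above probes for the apiserver with `pgrep -xnf`: `-f` matches the pattern as a regex against the full command line, `-x` requires the whole line to match, and `-n` selects the newest match; exit status 1 (as logged) means no such process exists yet. A standalone sketch against a dummy process, with illustrative names rather than minikube's:

```shell
# Start a dummy long-running process to probe:
sleep 60 &
pid=$!
# Full-command-line, anchored match; prints the newest matching PID on
# success, exits 1 if nothing matches (the "stopped" case in the log):
pgrep -xnf 'sleep 60' && echo "found"
kill "$pid"
```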
	I1002 11:10:57.851648  355913 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:10:57.851684  355913 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:10:57.851698  355913 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:10:57.851785  355913 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:10:57.889336  355913 cri.go:89] found id: ""
	I1002 11:10:57.889428  355913 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:10:57.904655  355913 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:10:57.915341  355913 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1002 11:10:57.915373  355913 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1002 11:10:57.915384  355913 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1002 11:10:57.915397  355913 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:10:57.915488  355913 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:10:57.915548  355913 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:10:57.924421  355913 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:10:57.924445  355913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:10:58.041689  355913 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 11:10:58.042314  355913 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1002 11:10:58.043018  355913 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1002 11:10:58.043635  355913 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 11:10:58.044443  355913 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1002 11:10:58.045153  355913 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1002 11:10:58.046884  355913 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1002 11:10:58.047783  355913 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1002 11:10:58.048642  355913 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1002 11:10:58.049503  355913 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 11:10:58.050316  355913 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 11:10:58.051350  355913 command_runner.go:130] > [certs] Using the existing "sa" key
	I1002 11:10:58.053034  355913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:10:58.761835  355913 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 11:10:58.761862  355913 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 11:10:58.761869  355913 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 11:10:58.761875  355913 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 11:10:58.761882  355913 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 11:10:58.761919  355913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:10:58.827665  355913 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:10:58.831589  355913 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:10:58.832071  355913 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1002 11:10:58.964795  355913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:10:59.045288  355913 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 11:10:59.045335  355913 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 11:10:59.048433  355913 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 11:10:59.049577  355913 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 11:10:59.053040  355913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:10:59.143480  355913 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 11:10:59.147753  355913 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:10:59.147851  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:10:59.163870  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:10:59.677350  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:11:00.177455  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:11:00.677957  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:11:01.177660  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:11:01.678119  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:11:01.705479  355913 command_runner.go:130] > 1115
	I1002 11:11:01.705526  355913 api_server.go:72] duration metric: took 2.557776972s to wait for apiserver process to appear ...
	I1002 11:11:01.705535  355913 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:11:01.705554  355913 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I1002 11:11:05.583847  355913 api_server.go:279] https://192.168.39.165:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:11:05.583879  355913 api_server.go:103] status: https://192.168.39.165:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:11:05.583909  355913 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I1002 11:11:05.650241  355913 api_server.go:279] https://192.168.39.165:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:11:05.650282  355913 api_server.go:103] status: https://192.168.39.165:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:11:06.151021  355913 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I1002 11:11:06.156761  355913 api_server.go:279] https://192.168.39.165:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:11:06.156801  355913 api_server.go:103] status: https://192.168.39.165:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:11:06.651455  355913 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I1002 11:11:06.660660  355913 api_server.go:279] https://192.168.39.165:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:11:06.660694  355913 api_server.go:103] status: https://192.168.39.165:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:11:07.150769  355913 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I1002 11:11:07.171124  355913 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I1002 11:11:07.171239  355913 round_trippers.go:463] GET https://192.168.39.165:8443/version
	I1002 11:11:07.171249  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:07.171257  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:07.171263  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:07.199173  355913 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1002 11:11:07.199207  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:07.199224  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:07.199233  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:07.199241  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:07.199249  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:07.199259  355913 round_trippers.go:580]     Content-Length: 263
	I1002 11:11:07.199267  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:07 GMT
	I1002 11:11:07.199275  355913 round_trippers.go:580]     Audit-Id: 41845607-4b17-4479-b45c-07deccebf44c
	I1002 11:11:07.199304  355913 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1002 11:11:07.199409  355913 api_server.go:141] control plane version: v1.28.2
	I1002 11:11:07.199434  355913 api_server.go:131] duration metric: took 5.493891192s to wait for apiserver health ...
	I1002 11:11:07.199445  355913 cni.go:84] Creating CNI manager for ""
	I1002 11:11:07.199453  355913 cni.go:136] 3 nodes found, recommending kindnet
	I1002 11:11:07.201231  355913 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1002 11:11:07.202799  355913 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 11:11:07.219695  355913 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1002 11:11:07.219728  355913 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1002 11:11:07.219739  355913 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1002 11:11:07.219752  355913 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 11:11:07.219762  355913 command_runner.go:130] > Access: 2023-10-02 11:10:34.846172782 +0000
	I1002 11:11:07.219770  355913 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I1002 11:11:07.219782  355913 command_runner.go:130] > Change: 2023-10-02 11:10:33.014172782 +0000
	I1002 11:11:07.219789  355913 command_runner.go:130] >  Birth: -
	I1002 11:11:07.219861  355913 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 11:11:07.219876  355913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 11:11:07.287239  355913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 11:11:08.401973  355913 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1002 11:11:08.402009  355913 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1002 11:11:08.402019  355913 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1002 11:11:08.402026  355913 command_runner.go:130] > daemonset.apps/kindnet configured
	I1002 11:11:08.402086  355913 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.114801432s)
	I1002 11:11:08.402150  355913 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:11:08.402295  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I1002 11:11:08.402304  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:08.402312  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:08.402318  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:08.406387  355913 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 11:11:08.406408  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:08.406418  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:08.406428  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:08 GMT
	I1002 11:11:08.406435  355913 round_trippers.go:580]     Audit-Id: b250dcf0-0f3e-436b-aef4-1c0d32be0720
	I1002 11:11:08.406443  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:08.406451  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:08.406461  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:08.408496  355913 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"800"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"783","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83215 chars]
	I1002 11:11:08.412440  355913 system_pods.go:59] 12 kube-system pods found
	I1002 11:11:08.412477  355913 system_pods.go:61] "coredns-5dd5756b68-h6gbq" [49ee2f4a-1c73-4642-bd3b-678e6cb9ef55] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:11:08.412487  355913 system_pods.go:61] "etcd-multinode-224116" [5accde9f-e62c-422f-aaa1-ddf4f8f0da05] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:11:08.412500  355913 system_pods.go:61] "kindnet-crtcw" [5db6eeb2-d639-49c6-a6d2-f8043567b6f2] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 11:11:08.412506  355913 system_pods.go:61] "kindnet-f7m28" [dc1438f0-bd67-457d-9e7e-b8998a01b029] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 11:11:08.412516  355913 system_pods.go:61] "kindnet-z2ps6" [069c01f2-f4f8-4dcf-922f-54693f17daed] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 11:11:08.412525  355913 system_pods.go:61] "kube-apiserver-multinode-224116" [26841310-e8b5-409e-8915-888db5e257ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:11:08.412534  355913 system_pods.go:61] "kube-controller-manager-multinode-224116" [7d71d06a-a323-41ce-a7a4-c7d33880f9fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 11:11:08.412542  355913 system_pods.go:61] "kube-proxy-8tg2f" [dd300e3b-222c-43bb-9997-2d1bddbc8e94] Running
	I1002 11:11:08.412546  355913 system_pods.go:61] "kube-proxy-nshcj" [f3def928-5e43-4f7e-8ae2-3c0daafd0003] Running
	I1002 11:11:08.412551  355913 system_pods.go:61] "kube-proxy-rdt77" [96482fa7-e7e4-4375-b3b6-cc24f41d4bcf] Running
	I1002 11:11:08.412560  355913 system_pods.go:61] "kube-scheduler-multinode-224116" [66f95d23-f489-423f-9008-a7cf03a9ee55] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 11:11:08.412567  355913 system_pods.go:61] "storage-provisioner" [ea5da043-58ea-4918-836d-19655c55b885] Running
	I1002 11:11:08.412573  355913 system_pods.go:74] duration metric: took 10.411374ms to wait for pod list to return data ...
	I1002 11:11:08.412580  355913 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:11:08.412637  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes
	I1002 11:11:08.412644  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:08.412651  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:08.412657  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:08.415129  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:08.415147  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:08.415156  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:08 GMT
	I1002 11:11:08.415165  355913 round_trippers.go:580]     Audit-Id: 614186f7-0f3d-423b-9035-d38e86a09200
	I1002 11:11:08.415177  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:08.415186  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:08.415196  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:08.415206  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:08.415587  355913 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"800"},"items":[{"metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"723","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15252 chars]
	I1002 11:11:08.416303  355913 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:11:08.416323  355913 node_conditions.go:123] node cpu capacity is 2
	I1002 11:11:08.416356  355913 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:11:08.416360  355913 node_conditions.go:123] node cpu capacity is 2
	I1002 11:11:08.416364  355913 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:11:08.416369  355913 node_conditions.go:123] node cpu capacity is 2
	I1002 11:11:08.416373  355913 node_conditions.go:105] duration metric: took 3.787942ms to run NodePressure ...
	I1002 11:11:08.416391  355913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:11:08.568240  355913 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1002 11:11:08.624910  355913 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1002 11:11:08.626441  355913 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:11:08.626546  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1002 11:11:08.626554  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:08.626562  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:08.626567  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:08.631074  355913 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 11:11:08.631094  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:08.631105  355913 round_trippers.go:580]     Audit-Id: d8accfd8-974e-4edb-8ad4-c3e8fcf5e5b6
	I1002 11:11:08.631112  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:08.631120  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:08.631129  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:08.631139  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:08.631148  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:08 GMT
	I1002 11:11:08.631464  355913 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"802"},"items":[{"metadata":{"name":"etcd-multinode-224116","namespace":"kube-system","uid":"5accde9f-e62c-422f-aaa1-ddf4f8f0da05","resourceVersion":"781","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.165:2379","kubernetes.io/config.hash":"6042e2dc777b7ecb1e5f00a006739c52","kubernetes.io/config.mirror":"6042e2dc777b7ecb1e5f00a006739c52","kubernetes.io/config.seen":"2023-10-02T11:00:31.044390279Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I1002 11:11:08.632448  355913 kubeadm.go:787] kubelet initialised
	I1002 11:11:08.632466  355913 kubeadm.go:788] duration metric: took 6.004679ms waiting for restarted kubelet to initialise ...
	I1002 11:11:08.632474  355913 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:11:08.632529  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I1002 11:11:08.632536  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:08.632544  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:08.632550  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:08.635758  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:11:08.635780  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:08.635790  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:08.635799  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:08.635808  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:08 GMT
	I1002 11:11:08.635813  355913 round_trippers.go:580]     Audit-Id: 23077f8b-b987-41b2-94ac-08d88b489279
	I1002 11:11:08.635820  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:08.635828  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:08.637253  355913 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"802"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"783","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83215 chars]
	I1002 11:11:08.639687  355913 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:08.639770  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:11:08.639780  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:08.639788  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:08.639793  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:08.641679  355913 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:11:08.641698  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:08.641706  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:08.641717  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:08.641737  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:08.641750  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:08 GMT
	I1002 11:11:08.641759  355913 round_trippers.go:580]     Audit-Id: 2820ec13-195f-4e9b-a161-b9d6c228ed13
	I1002 11:11:08.641768  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:08.641909  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"783","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1002 11:11:08.642332  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:08.642346  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:08.642370  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:08.642381  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:08.644276  355913 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:11:08.644291  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:08.644298  355913 round_trippers.go:580]     Audit-Id: eed35d69-0155-4d9e-8192-40407ab10965
	I1002 11:11:08.644307  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:08.644314  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:08.644322  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:08.644332  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:08.644341  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:08 GMT
	I1002 11:11:08.644628  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"723","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1002 11:11:08.645007  355913 pod_ready.go:97] node "multinode-224116" hosting pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-224116" has status "Ready":"False"
	I1002 11:11:08.645032  355913 pod_ready.go:81] duration metric: took 5.324851ms waiting for pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace to be "Ready" ...
	E1002 11:11:08.645044  355913 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-224116" hosting pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-224116" has status "Ready":"False"
	I1002 11:11:08.645056  355913 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:08.645114  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-224116
	I1002 11:11:08.645125  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:08.645136  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:08.645146  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:08.647231  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:08.647247  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:08.647253  355913 round_trippers.go:580]     Audit-Id: f8320559-42ee-471f-8f0a-9ca5f288d004
	I1002 11:11:08.647258  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:08.647263  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:08.647268  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:08.647275  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:08.647284  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:08 GMT
	I1002 11:11:08.647558  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-224116","namespace":"kube-system","uid":"5accde9f-e62c-422f-aaa1-ddf4f8f0da05","resourceVersion":"781","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.165:2379","kubernetes.io/config.hash":"6042e2dc777b7ecb1e5f00a006739c52","kubernetes.io/config.mirror":"6042e2dc777b7ecb1e5f00a006739c52","kubernetes.io/config.seen":"2023-10-02T11:00:31.044390279Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1002 11:11:08.647881  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:08.647892  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:08.647899  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:08.647905  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:08.649572  355913 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:11:08.649593  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:08.649602  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:08.649610  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:08.649618  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:08.649626  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:08 GMT
	I1002 11:11:08.649635  355913 round_trippers.go:580]     Audit-Id: 12770409-4ea9-447d-b042-8c3b6b59d5a8
	I1002 11:11:08.649645  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:08.649911  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"723","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1002 11:11:08.650180  355913 pod_ready.go:97] node "multinode-224116" hosting pod "etcd-multinode-224116" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-224116" has status "Ready":"False"
	I1002 11:11:08.650195  355913 pod_ready.go:81] duration metric: took 5.128796ms waiting for pod "etcd-multinode-224116" in "kube-system" namespace to be "Ready" ...
	E1002 11:11:08.650204  355913 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-224116" hosting pod "etcd-multinode-224116" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-224116" has status "Ready":"False"
	I1002 11:11:08.650217  355913 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:08.650261  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-224116
	I1002 11:11:08.650271  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:08.650279  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:08.650291  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:08.652190  355913 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:11:08.652205  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:08.652211  355913 round_trippers.go:580]     Audit-Id: a01fe6df-8b68-46e3-81ed-f480f3a856cf
	I1002 11:11:08.652217  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:08.652222  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:08.652227  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:08.652232  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:08.652237  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:08 GMT
	I1002 11:11:08.652394  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-224116","namespace":"kube-system","uid":"26841310-e8b5-409e-8915-888db5e257ab","resourceVersion":"773","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.165:8443","kubernetes.io/config.hash":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.mirror":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.seen":"2023-10-02T11:00:31.044391274Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1002 11:11:08.652748  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:08.652758  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:08.652765  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:08.652771  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:08.654629  355913 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:11:08.654642  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:08.654648  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:08.654654  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:08.654660  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:08.654665  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:08.654674  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:08 GMT
	I1002 11:11:08.654683  355913 round_trippers.go:580]     Audit-Id: 66bf9449-03e2-402a-8d22-23473a47794d
	I1002 11:11:08.654844  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"723","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1002 11:11:08.655101  355913 pod_ready.go:97] node "multinode-224116" hosting pod "kube-apiserver-multinode-224116" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-224116" has status "Ready":"False"
	I1002 11:11:08.655117  355913 pod_ready.go:81] duration metric: took 4.891685ms waiting for pod "kube-apiserver-multinode-224116" in "kube-system" namespace to be "Ready" ...
	E1002 11:11:08.655125  355913 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-224116" hosting pod "kube-apiserver-multinode-224116" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-224116" has status "Ready":"False"
	I1002 11:11:08.655132  355913 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:08.655168  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-224116
	I1002 11:11:08.655175  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:08.655182  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:08.655190  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:08.657009  355913 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:11:08.657028  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:08.657035  355913 round_trippers.go:580]     Audit-Id: ef68d3b1-f4c7-4219-bacc-52ff8ec68f7f
	I1002 11:11:08.657043  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:08.657052  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:08.657064  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:08.657073  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:08.657093  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:08 GMT
	I1002 11:11:08.657279  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-224116","namespace":"kube-system","uid":"7d71d06a-a323-41ce-a7a4-c7d33880f9fa","resourceVersion":"775","creationTimestamp":"2023-10-02T11:00:40Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3bee2fdb49df38e62a3033b15b9a59ad","kubernetes.io/config.mirror":"3bee2fdb49df38e62a3033b15b9a59ad","kubernetes.io/config.seen":"2023-10-02T11:00:39.980801936Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I1002 11:11:08.803004  355913 request.go:629] Waited for 145.293766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:08.803107  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:08.803115  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:08.803127  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:08.803141  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:08.806037  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:08.806058  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:08.806065  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:08.806070  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:08.806075  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:08.806082  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:08.806090  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:08 GMT
	I1002 11:11:08.806099  355913 round_trippers.go:580]     Audit-Id: 31dd9d0f-deb6-4ee2-8a3c-2010fb30e651
	I1002 11:11:08.806431  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"723","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1002 11:11:08.806875  355913 pod_ready.go:97] node "multinode-224116" hosting pod "kube-controller-manager-multinode-224116" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-224116" has status "Ready":"False"
	I1002 11:11:08.806900  355913 pod_ready.go:81] duration metric: took 151.760745ms waiting for pod "kube-controller-manager-multinode-224116" in "kube-system" namespace to be "Ready" ...
	E1002 11:11:08.806914  355913 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-224116" hosting pod "kube-controller-manager-multinode-224116" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-224116" has status "Ready":"False"
	I1002 11:11:08.806930  355913 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8tg2f" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:09.003352  355913 request.go:629] Waited for 196.339506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tg2f
	I1002 11:11:09.003436  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tg2f
	I1002 11:11:09.003443  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:09.003456  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:09.003470  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:09.006188  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:09.006210  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:09.006216  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:09.006225  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:09.006234  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:09.006243  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:08 GMT
	I1002 11:11:09.006252  355913 round_trippers.go:580]     Audit-Id: ca49c4d1-7acf-4a20-9e50-f18aa501882f
	I1002 11:11:09.006262  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:09.006463  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8tg2f","generateName":"kube-proxy-","namespace":"kube-system","uid":"dd300e3b-222c-43bb-9997-2d1bddbc8e94","resourceVersion":"683","creationTimestamp":"2023-10-02T11:02:28Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3580e93b-6724-4c5c-baca-8e1b963cda9a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3580e93b-6724-4c5c-baca-8e1b963cda9a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1002 11:11:09.202410  355913 request.go:629] Waited for 195.342834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m03
	I1002 11:11:09.202493  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m03
	I1002 11:11:09.202500  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:09.202511  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:09.202521  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:09.205521  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:09.205544  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:09.205553  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:09.205559  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:09.205565  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:09.205573  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:09 GMT
	I1002 11:11:09.205582  355913 round_trippers.go:580]     Audit-Id: 75469c47-3c27-4cca-844a-c3aee0ef9b6f
	I1002 11:11:09.205592  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:09.205704  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m03","uid":"60156cb0-4b83-40ca-ab0d-93bdf316a64a","resourceVersion":"707","creationTimestamp":"2023-10-02T11:03:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:03:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I1002 11:11:09.206002  355913 pod_ready.go:92] pod "kube-proxy-8tg2f" in "kube-system" namespace has status "Ready":"True"
	I1002 11:11:09.206020  355913 pod_ready.go:81] duration metric: took 399.081019ms waiting for pod "kube-proxy-8tg2f" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:09.206033  355913 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nshcj" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:09.402335  355913 request.go:629] Waited for 196.230133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nshcj
	I1002 11:11:09.402449  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nshcj
	I1002 11:11:09.402458  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:09.402471  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:09.402487  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:09.405703  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:11:09.405728  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:09.405735  355913 round_trippers.go:580]     Audit-Id: d8a6061b-ca84-422e-b919-b334bbd47bae
	I1002 11:11:09.405741  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:09.405746  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:09.405751  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:09.405756  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:09.405761  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:09 GMT
	I1002 11:11:09.406011  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nshcj","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3def928-5e43-4f7e-8ae2-3c0daafd0003","resourceVersion":"800","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3580e93b-6724-4c5c-baca-8e1b963cda9a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3580e93b-6724-4c5c-baca-8e1b963cda9a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1002 11:11:09.602957  355913 request.go:629] Waited for 196.34466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:09.603052  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:09.603059  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:09.603083  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:09.603099  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:09.605877  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:09.605902  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:09.605912  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:09 GMT
	I1002 11:11:09.605921  355913 round_trippers.go:580]     Audit-Id: 75baebc6-4cc0-4cd4-8135-d280328e5675
	I1002 11:11:09.605930  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:09.605939  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:09.605952  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:09.605964  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:09.606157  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"723","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1002 11:11:09.606617  355913 pod_ready.go:97] node "multinode-224116" hosting pod "kube-proxy-nshcj" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-224116" has status "Ready":"False"
	I1002 11:11:09.606659  355913 pod_ready.go:81] duration metric: took 400.617407ms waiting for pod "kube-proxy-nshcj" in "kube-system" namespace to be "Ready" ...
	E1002 11:11:09.606682  355913 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-224116" hosting pod "kube-proxy-nshcj" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-224116" has status "Ready":"False"
	I1002 11:11:09.606698  355913 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rdt77" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:09.803127  355913 request.go:629] Waited for 196.34792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdt77
	I1002 11:11:09.803217  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdt77
	I1002 11:11:09.803228  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:09.803236  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:09.803244  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:09.806438  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:11:09.806468  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:09.806478  355913 round_trippers.go:580]     Audit-Id: 7086f059-af2e-4968-9577-f375df305324
	I1002 11:11:09.806485  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:09.806506  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:09.806553  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:09.806572  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:09.806585  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:09 GMT
	I1002 11:11:09.806849  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rdt77","generateName":"kube-proxy-","namespace":"kube-system","uid":"96482fa7-e7e4-4375-b3b6-cc24f41d4bcf","resourceVersion":"477","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3580e93b-6724-4c5c-baca-8e1b963cda9a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3580e93b-6724-4c5c-baca-8e1b963cda9a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1002 11:11:10.002746  355913 request.go:629] Waited for 195.407733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:11:10.002818  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:11:10.002823  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:10.002831  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:10.002837  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:10.005602  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:10.005625  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:10.005632  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:10.005638  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:09 GMT
	I1002 11:11:10.005643  355913 round_trippers.go:580]     Audit-Id: 49357ac1-01c6-4a29-a4ff-018664cfa74a
	I1002 11:11:10.005648  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:10.005653  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:10.005658  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:10.005848  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"711","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3684 chars]
	I1002 11:11:10.006313  355913 pod_ready.go:92] pod "kube-proxy-rdt77" in "kube-system" namespace has status "Ready":"True"
	I1002 11:11:10.006337  355913 pod_ready.go:81] duration metric: took 399.62747ms waiting for pod "kube-proxy-rdt77" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:10.006371  355913 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:10.202891  355913 request.go:629] Waited for 196.418523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-224116
	I1002 11:11:10.202970  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-224116
	I1002 11:11:10.202978  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:10.202992  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:10.203013  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:10.207701  355913 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 11:11:10.207731  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:10.207742  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:10.207751  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:10.207759  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:10.207768  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:10.207775  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:10 GMT
	I1002 11:11:10.207784  355913 round_trippers.go:580]     Audit-Id: c649f787-4403-479b-8ef5-ccf871f4deeb
	I1002 11:11:10.207930  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-224116","namespace":"kube-system","uid":"66f95d23-f489-423f-9008-a7cf03a9ee55","resourceVersion":"776","creationTimestamp":"2023-10-02T11:00:40Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9bd8dc11ca1ef87923294a95bf3b31e7","kubernetes.io/config.mirror":"9bd8dc11ca1ef87923294a95bf3b31e7","kubernetes.io/config.seen":"2023-10-02T11:00:39.980802889Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I1002 11:11:10.402845  355913 request.go:629] Waited for 194.369732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:10.402915  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:10.402922  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:10.402937  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:10.402946  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:10.405682  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:10.405709  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:10.405720  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:10.405730  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:10.405739  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:10.405748  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:10 GMT
	I1002 11:11:10.405759  355913 round_trippers.go:580]     Audit-Id: 396b9304-305d-40b6-b561-ef444819f0be
	I1002 11:11:10.405767  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:10.405997  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"723","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1002 11:11:10.406584  355913 pod_ready.go:97] node "multinode-224116" hosting pod "kube-scheduler-multinode-224116" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-224116" has status "Ready":"False"
	I1002 11:11:10.406616  355913 pod_ready.go:81] duration metric: took 400.2335ms waiting for pod "kube-scheduler-multinode-224116" in "kube-system" namespace to be "Ready" ...
	E1002 11:11:10.406633  355913 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-224116" hosting pod "kube-scheduler-multinode-224116" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-224116" has status "Ready":"False"
	I1002 11:11:10.406644  355913 pod_ready.go:38] duration metric: took 1.774160092s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:11:10.406674  355913 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:11:10.419385  355913 command_runner.go:130] > -16
	I1002 11:11:10.419774  355913 ops.go:34] apiserver oom_adj: -16
	I1002 11:11:10.419795  355913 kubeadm.go:640] restartCluster took 22.58887945s
	I1002 11:11:10.419807  355913 kubeadm.go:406] StartCluster complete in 22.63980942s
	I1002 11:11:10.419834  355913 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:11:10.419932  355913 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:11:10.420503  355913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:11:10.420736  355913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:11:10.420918  355913 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:11:10.422855  355913 out.go:177] * Enabled addons: 
	I1002 11:11:10.421087  355913 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:11:10.421128  355913 config.go:182] Loaded profile config "multinode-224116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:11:10.424189  355913 addons.go:502] enable addons completed in 3.278518ms: enabled=[]
	I1002 11:11:10.424530  355913 kapi.go:59] client config for multinode-224116: &rest.Config{Host:"https://192.168.39.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:11:10.425007  355913 round_trippers.go:463] GET https://192.168.39.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 11:11:10.425024  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:10.425035  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:10.425046  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:10.427900  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:10.427918  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:10.427927  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:10.427933  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:10.427940  355913 round_trippers.go:580]     Content-Length: 291
	I1002 11:11:10.427948  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:10 GMT
	I1002 11:11:10.427960  355913 round_trippers.go:580]     Audit-Id: 182207f5-311d-463e-8e01-63575f703fa8
	I1002 11:11:10.427970  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:10.427981  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:10.428044  355913 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08c5bbea-ba20-4e90-9cf5-25582be54095","resourceVersion":"801","creationTimestamp":"2023-10-02T11:00:39Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1002 11:11:10.428199  355913 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-224116" context rescaled to 1 replicas
	I1002 11:11:10.428228  355913 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:11:10.429728  355913 out.go:177] * Verifying Kubernetes components...
	I1002 11:11:10.431044  355913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:11:10.534657  355913 command_runner.go:130] > apiVersion: v1
	I1002 11:11:10.534678  355913 command_runner.go:130] > data:
	I1002 11:11:10.534683  355913 command_runner.go:130] >   Corefile: |
	I1002 11:11:10.534687  355913 command_runner.go:130] >     .:53 {
	I1002 11:11:10.534690  355913 command_runner.go:130] >         log
	I1002 11:11:10.534695  355913 command_runner.go:130] >         errors
	I1002 11:11:10.534699  355913 command_runner.go:130] >         health {
	I1002 11:11:10.534704  355913 command_runner.go:130] >            lameduck 5s
	I1002 11:11:10.534707  355913 command_runner.go:130] >         }
	I1002 11:11:10.534712  355913 command_runner.go:130] >         ready
	I1002 11:11:10.534717  355913 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1002 11:11:10.534721  355913 command_runner.go:130] >            pods insecure
	I1002 11:11:10.534727  355913 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1002 11:11:10.534734  355913 command_runner.go:130] >            ttl 30
	I1002 11:11:10.534742  355913 command_runner.go:130] >         }
	I1002 11:11:10.534749  355913 command_runner.go:130] >         prometheus :9153
	I1002 11:11:10.534755  355913 command_runner.go:130] >         hosts {
	I1002 11:11:10.534763  355913 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1002 11:11:10.534771  355913 command_runner.go:130] >            fallthrough
	I1002 11:11:10.534776  355913 command_runner.go:130] >         }
	I1002 11:11:10.534783  355913 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1002 11:11:10.534791  355913 command_runner.go:130] >            max_concurrent 1000
	I1002 11:11:10.534798  355913 command_runner.go:130] >         }
	I1002 11:11:10.534804  355913 command_runner.go:130] >         cache 30
	I1002 11:11:10.534812  355913 command_runner.go:130] >         loop
	I1002 11:11:10.534823  355913 command_runner.go:130] >         reload
	I1002 11:11:10.534830  355913 command_runner.go:130] >         loadbalance
	I1002 11:11:10.534836  355913 command_runner.go:130] >     }
	I1002 11:11:10.534843  355913 command_runner.go:130] > kind: ConfigMap
	I1002 11:11:10.534851  355913 command_runner.go:130] > metadata:
	I1002 11:11:10.534858  355913 command_runner.go:130] >   creationTimestamp: "2023-10-02T11:00:39Z"
	I1002 11:11:10.534869  355913 command_runner.go:130] >   name: coredns
	I1002 11:11:10.534876  355913 command_runner.go:130] >   namespace: kube-system
	I1002 11:11:10.534889  355913 command_runner.go:130] >   resourceVersion: "362"
	I1002 11:11:10.534896  355913 command_runner.go:130] >   uid: 97cf364f-a332-48e3-9bc9-5e6bec4b59c1
	I1002 11:11:10.534975  355913 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1002 11:11:10.535085  355913 node_ready.go:35] waiting up to 6m0s for node "multinode-224116" to be "Ready" ...
	I1002 11:11:10.602447  355913 request.go:629] Waited for 67.196228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:10.602518  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:10.602523  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:10.602531  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:10.602537  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:10.605254  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:10.605284  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:10.605296  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:10.605304  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:10.605322  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:10.605331  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:10.605341  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:10 GMT
	I1002 11:11:10.605349  355913 round_trippers.go:580]     Audit-Id: 180b2d58-76e6-4c65-8179-06f9f0b5c819
	I1002 11:11:10.605549  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"723","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1002 11:11:10.803355  355913 request.go:629] Waited for 197.372837ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:10.803421  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:10.803426  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:10.803433  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:10.803440  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:10.807345  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:11:10.807363  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:10.807369  355913 round_trippers.go:580]     Audit-Id: f7578a74-f45d-41e5-9a5f-ee5e2818b144
	I1002 11:11:10.807375  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:10.807383  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:10.807392  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:10.807410  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:10.807418  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:10 GMT
	I1002 11:11:10.807758  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"723","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1002 11:11:11.308867  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:11.308892  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:11.308901  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:11.308907  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:11.316597  355913 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1002 11:11:11.316628  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:11.316638  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:11.316646  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:11.316654  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:11 GMT
	I1002 11:11:11.316664  355913 round_trippers.go:580]     Audit-Id: cc547fc3-ce65-4cd9-9311-9ab62f0bc884
	I1002 11:11:11.316672  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:11.316682  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:11.316778  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:11.317082  355913 node_ready.go:49] node "multinode-224116" has status "Ready":"True"
	I1002 11:11:11.317097  355913 node_ready.go:38] duration metric: took 781.979608ms waiting for node "multinode-224116" to be "Ready" ...
	I1002 11:11:11.317106  355913 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:11:11.317173  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I1002 11:11:11.317181  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:11.317188  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:11.317194  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:11.325805  355913 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1002 11:11:11.325832  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:11.325839  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:11.325844  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:11 GMT
	I1002 11:11:11.325850  355913 round_trippers.go:580]     Audit-Id: 0ccb7280-3b41-4a5c-bb5a-501594b92d3e
	I1002 11:11:11.325855  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:11.325860  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:11.325865  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:11.328280  355913 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"829"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"783","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83215 chars]
	I1002 11:11:11.330878  355913 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:11.403233  355913 request.go:629] Waited for 72.237754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:11:11.403303  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:11:11.403308  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:11.403318  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:11.403328  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:11.405866  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:11.405885  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:11.405892  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:11 GMT
	I1002 11:11:11.405897  355913 round_trippers.go:580]     Audit-Id: 27b9868b-b28c-4a25-9997-336e234e01ca
	I1002 11:11:11.405903  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:11.405908  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:11.405913  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:11.405920  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:11.406083  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"783","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1002 11:11:11.602992  355913 request.go:629] Waited for 196.380239ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:11.603082  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:11.603090  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:11.603101  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:11.603111  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:11.605970  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:11.605991  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:11.605999  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:11.606004  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:11 GMT
	I1002 11:11:11.606009  355913 round_trippers.go:580]     Audit-Id: 7a558d53-64ad-4ae4-81d5-2c9c82872e6b
	I1002 11:11:11.606014  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:11.606019  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:11.606024  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:11.606304  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:11.803124  355913 request.go:629] Waited for 196.37476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:11:11.803209  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:11:11.803214  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:11.803222  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:11.803228  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:11.806186  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:11.806209  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:11.806215  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:11.806221  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:11.806226  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:11 GMT
	I1002 11:11:11.806231  355913 round_trippers.go:580]     Audit-Id: 8443d377-7c1b-4166-8661-c1e8cfde62f9
	I1002 11:11:11.806238  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:11.806247  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:11.806870  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"783","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1002 11:11:12.002816  355913 request.go:629] Waited for 195.350177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:12.002893  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:12.002898  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:12.002917  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:12.002924  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:12.005446  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:12.005495  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:12.005506  355913 round_trippers.go:580]     Audit-Id: b847acbb-d8b6-469a-8844-0080ace2526d
	I1002 11:11:12.005512  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:12.005518  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:12.005523  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:12.005531  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:12.005539  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:11 GMT
	I1002 11:11:12.005745  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:12.506986  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:11:12.507010  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:12.507019  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:12.507025  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:12.509482  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:12.509506  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:12.509516  355913 round_trippers.go:580]     Audit-Id: 3c61dc3a-ae5c-41d7-87f5-726babdb603c
	I1002 11:11:12.509523  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:12.509531  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:12.509538  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:12.509547  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:12.509558  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:12 GMT
	I1002 11:11:12.509890  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"783","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1002 11:11:12.510535  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:12.510551  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:12.510561  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:12.510567  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:12.512761  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:12.512780  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:12.512789  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:12 GMT
	I1002 11:11:12.512797  355913 round_trippers.go:580]     Audit-Id: 0aa31eae-3958-42c3-ada5-c3e82a26fa38
	I1002 11:11:12.512806  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:12.512819  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:12.512831  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:12.512843  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:12.513043  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:13.006680  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:11:13.006711  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:13.006725  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:13.006733  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:13.009706  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:13.009726  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:13.009734  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:13.009745  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:13.009753  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:13.009761  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:13.009771  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:12 GMT
	I1002 11:11:13.009779  355913 round_trippers.go:580]     Audit-Id: 5a935c22-738f-4aee-8d36-b162bbd4c618
	I1002 11:11:13.010501  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"783","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1002 11:11:13.010946  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:13.010958  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:13.010965  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:13.010971  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:13.013125  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:13.013142  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:13.013151  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:13.013160  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:13.013169  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:13.013184  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:12 GMT
	I1002 11:11:13.013200  355913 round_trippers.go:580]     Audit-Id: d62364a7-2d6a-408c-9536-933ab584e353
	I1002 11:11:13.013213  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:13.013581  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:13.507260  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:11:13.507289  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:13.507303  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:13.507312  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:13.510047  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:13.510073  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:13.510083  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:13.510090  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:13 GMT
	I1002 11:11:13.510095  355913 round_trippers.go:580]     Audit-Id: f60d53f6-b36d-4d92-a561-bb8404f71063
	I1002 11:11:13.510101  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:13.510106  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:13.510113  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:13.510683  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"783","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1002 11:11:13.511372  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:13.511391  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:13.511403  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:13.511413  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:13.514123  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:13.514144  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:13.514154  355913 round_trippers.go:580]     Audit-Id: abac925e-2f6b-465e-9491-1f89f0390e83
	I1002 11:11:13.514163  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:13.514174  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:13.514186  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:13.514196  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:13.514208  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:13 GMT
	I1002 11:11:13.514345  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:13.514683  355913 pod_ready.go:102] pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace has status "Ready":"False"
	I1002 11:11:14.007096  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:11:14.007131  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:14.007145  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:14.007155  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:14.010239  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:11:14.010265  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:14.010275  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:14.010282  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:13 GMT
	I1002 11:11:14.010289  355913 round_trippers.go:580]     Audit-Id: e783c0d3-13dc-4ead-bf89-804fef9aeb8f
	I1002 11:11:14.010297  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:14.010306  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:14.010315  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:14.010519  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"783","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1002 11:11:14.011386  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:14.011404  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:14.011415  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:14.011433  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:14.014414  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:14.014434  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:14.014443  355913 round_trippers.go:580]     Audit-Id: 50263402-07d5-4d38-b5d7-1eb265a97876
	I1002 11:11:14.014451  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:14.014458  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:14.014466  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:14.014473  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:14.014482  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:13 GMT
	I1002 11:11:14.014708  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:14.506889  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:11:14.506914  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:14.506923  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:14.506929  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:14.510022  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:11:14.510043  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:14.510050  355913 round_trippers.go:580]     Audit-Id: b7136696-a325-4518-bfa3-33ebcdcdf4d0
	I1002 11:11:14.510056  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:14.510061  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:14.510066  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:14.510071  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:14.510076  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:14 GMT
	I1002 11:11:14.510402  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"783","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1002 11:11:14.510978  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:14.510995  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:14.511006  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:14.511015  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:14.513761  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:14.513782  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:14.513790  355913 round_trippers.go:580]     Audit-Id: 2a806eb8-a672-4a13-b199-511001318327
	I1002 11:11:14.513797  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:14.513806  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:14.513816  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:14.513826  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:14.513831  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:14 GMT
	I1002 11:11:14.514017  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:15.006537  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:11:15.006561  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:15.006573  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:15.006582  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:15.009402  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:15.009424  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:15.009431  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:15.009437  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:14 GMT
	I1002 11:11:15.009442  355913 round_trippers.go:580]     Audit-Id: 7bb3be5d-3735-415e-af7c-561031fed07f
	I1002 11:11:15.009447  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:15.009455  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:15.009481  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:15.009760  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"783","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1002 11:11:15.010399  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:15.010414  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:15.010425  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:15.010435  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:15.012496  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:15.012518  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:15.012526  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:15.012532  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:14 GMT
	I1002 11:11:15.012539  355913 round_trippers.go:580]     Audit-Id: 8486c576-3146-49c0-952a-84a7c80e989e
	I1002 11:11:15.012547  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:15.012556  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:15.012565  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:15.012691  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:15.507412  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:11:15.507441  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:15.507451  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:15.507459  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:15.509959  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:15.509980  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:15.509987  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:15.509992  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:15.509997  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:15.510002  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:15.510007  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:15 GMT
	I1002 11:11:15.510012  355913 round_trippers.go:580]     Audit-Id: dd1f0638-bbff-4ac4-bd42-0d1d1114990b
	I1002 11:11:15.510233  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"841","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1002 11:11:15.510755  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:15.510771  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:15.510780  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:15.510789  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:15.512927  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:15.512948  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:15.512958  355913 round_trippers.go:580]     Audit-Id: 5d4a7dc0-c15e-41cf-98de-a559b7947761
	I1002 11:11:15.512967  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:15.512976  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:15.512991  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:15.513004  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:15.513012  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:15 GMT
	I1002 11:11:15.513134  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:15.513543  355913 pod_ready.go:92] pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace has status "Ready":"True"
	I1002 11:11:15.513567  355913 pod_ready.go:81] duration metric: took 4.182665336s waiting for pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:15.513580  355913 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:15.513654  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-224116
	I1002 11:11:15.513662  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:15.513669  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:15.513675  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:15.515815  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:15.515832  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:15.515842  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:15.515849  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:15 GMT
	I1002 11:11:15.515857  355913 round_trippers.go:580]     Audit-Id: 7cc030ab-95d0-4a8b-91b3-e05f7701763a
	I1002 11:11:15.515864  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:15.515878  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:15.515890  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:15.516044  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-224116","namespace":"kube-system","uid":"5accde9f-e62c-422f-aaa1-ddf4f8f0da05","resourceVersion":"835","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.165:2379","kubernetes.io/config.hash":"6042e2dc777b7ecb1e5f00a006739c52","kubernetes.io/config.mirror":"6042e2dc777b7ecb1e5f00a006739c52","kubernetes.io/config.seen":"2023-10-02T11:00:31.044390279Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1002 11:11:15.516482  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:15.516495  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:15.516503  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:15.516511  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:15.518248  355913 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:11:15.518262  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:15.518271  355913 round_trippers.go:580]     Audit-Id: 376a2c3b-fb70-4056-95fc-399e8d0f2bc6
	I1002 11:11:15.518279  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:15.518289  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:15.518303  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:15.518312  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:15.518325  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:15 GMT
	I1002 11:11:15.518712  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:15.519031  355913 pod_ready.go:92] pod "etcd-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:11:15.519048  355913 pod_ready.go:81] duration metric: took 5.4477ms waiting for pod "etcd-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:15.519070  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:15.519127  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-224116
	I1002 11:11:15.519136  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:15.519147  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:15.519158  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:15.521632  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:15.521647  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:15.521653  355913 round_trippers.go:580]     Audit-Id: b34a1161-32c4-4f34-a10f-3d87668ee824
	I1002 11:11:15.521658  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:15.521663  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:15.521668  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:15.521673  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:15.521679  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:15 GMT
	I1002 11:11:15.521888  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-224116","namespace":"kube-system","uid":"26841310-e8b5-409e-8915-888db5e257ab","resourceVersion":"773","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.165:8443","kubernetes.io/config.hash":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.mirror":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.seen":"2023-10-02T11:00:31.044391274Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1002 11:11:15.602445  355913 request.go:629] Waited for 80.185559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:15.602548  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:15.602557  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:15.602585  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:15.602599  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:15.605245  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:15.605268  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:15.605278  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:15.605285  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:15.605294  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:15.605303  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:15 GMT
	I1002 11:11:15.605313  355913 round_trippers.go:580]     Audit-Id: 91b85ac9-5e81-4fbc-a668-1c67eb0529d0
	I1002 11:11:15.605322  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:15.605515  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:15.803313  355913 request.go:629] Waited for 197.357671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-224116
	I1002 11:11:15.803402  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-224116
	I1002 11:11:15.803412  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:15.803425  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:15.803439  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:15.807828  355913 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 11:11:15.807850  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:15.807857  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:15.807862  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:15.807867  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:15 GMT
	I1002 11:11:15.807872  355913 round_trippers.go:580]     Audit-Id: 915246aa-0232-4279-8a23-cc5d46e53b70
	I1002 11:11:15.807878  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:15.807883  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:15.808690  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-224116","namespace":"kube-system","uid":"26841310-e8b5-409e-8915-888db5e257ab","resourceVersion":"773","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.165:8443","kubernetes.io/config.hash":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.mirror":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.seen":"2023-10-02T11:00:31.044391274Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1002 11:11:16.002431  355913 request.go:629] Waited for 193.291068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:16.002513  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:16.002521  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:16.002536  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:16.002549  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:16.007531  355913 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 11:11:16.007564  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:16.007574  355913 round_trippers.go:580]     Audit-Id: b37b48e8-ac39-48e2-b495-bfd3dd7fc558
	I1002 11:11:16.007583  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:16.007591  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:16.007602  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:16.007613  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:16.007625  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:15 GMT
	I1002 11:11:16.007880  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:16.508977  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-224116
	I1002 11:11:16.509005  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:16.509020  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:16.509029  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:16.511592  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:16.511617  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:16.511627  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:16.511637  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:16.511647  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:16.511655  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:16 GMT
	I1002 11:11:16.511664  355913 round_trippers.go:580]     Audit-Id: d94c00cf-70f8-4641-9dd2-b27a8c9c7a4a
	I1002 11:11:16.511670  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:16.511804  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-224116","namespace":"kube-system","uid":"26841310-e8b5-409e-8915-888db5e257ab","resourceVersion":"773","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.165:8443","kubernetes.io/config.hash":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.mirror":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.seen":"2023-10-02T11:00:31.044391274Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1002 11:11:16.512332  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:16.512348  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:16.512356  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:16.512362  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:16.514459  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:16.514482  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:16.514492  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:16 GMT
	I1002 11:11:16.514501  355913 round_trippers.go:580]     Audit-Id: 3eda157c-b3f1-425f-81ea-e009793499ce
	I1002 11:11:16.514510  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:16.514518  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:16.514530  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:16.514539  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:16.514827  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:17.008485  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-224116
	I1002 11:11:17.008515  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:17.008528  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:17.008537  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:17.011770  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:11:17.011793  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:17.011800  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:17.011806  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:16 GMT
	I1002 11:11:17.011811  355913 round_trippers.go:580]     Audit-Id: 60f70e6a-64a2-4ecc-b78a-d9de4eca081b
	I1002 11:11:17.011816  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:17.011821  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:17.011827  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:17.012403  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-224116","namespace":"kube-system","uid":"26841310-e8b5-409e-8915-888db5e257ab","resourceVersion":"773","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.165:8443","kubernetes.io/config.hash":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.mirror":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.seen":"2023-10-02T11:00:31.044391274Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1002 11:11:17.012858  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:17.012870  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:17.012878  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:17.012883  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:17.015276  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:17.015301  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:17.015311  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:17.015320  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:17.015333  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:17.015346  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:17 GMT
	I1002 11:11:17.015359  355913 round_trippers.go:580]     Audit-Id: a59207bd-be4c-4e70-94fb-dfcca26cbd07
	I1002 11:11:17.015371  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:17.015649  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:17.508873  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-224116
	I1002 11:11:17.508901  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:17.508912  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:17.508920  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:17.511843  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:17.511870  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:17.511881  355913 round_trippers.go:580]     Audit-Id: 4bcde23d-e1dc-4347-8c06-b0929e979452
	I1002 11:11:17.511890  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:17.511898  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:17.511905  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:17.511913  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:17.511919  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:17 GMT
	I1002 11:11:17.512094  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-224116","namespace":"kube-system","uid":"26841310-e8b5-409e-8915-888db5e257ab","resourceVersion":"773","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.165:8443","kubernetes.io/config.hash":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.mirror":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.seen":"2023-10-02T11:00:31.044391274Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1002 11:11:17.512553  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:17.512567  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:17.512578  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:17.512587  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:17.515124  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:17.515145  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:17.515155  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:17.515164  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:17.515171  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:17 GMT
	I1002 11:11:17.515179  355913 round_trippers.go:580]     Audit-Id: ae810f8a-5ed0-4b6e-8e83-91637e488b78
	I1002 11:11:17.515187  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:17.515195  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:17.515393  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:18.009456  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-224116
	I1002 11:11:18.009487  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:18.009500  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:18.009509  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:18.012211  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:18.012239  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:18.012251  355913 round_trippers.go:580]     Audit-Id: b32727f6-6577-4fd8-8488-7949d687bc73
	I1002 11:11:18.012259  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:18.012266  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:18.012275  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:18.012287  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:18.012299  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:18 GMT
	I1002 11:11:18.012454  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-224116","namespace":"kube-system","uid":"26841310-e8b5-409e-8915-888db5e257ab","resourceVersion":"773","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.165:8443","kubernetes.io/config.hash":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.mirror":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.seen":"2023-10-02T11:00:31.044391274Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1002 11:11:18.012942  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:18.012960  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:18.012967  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:18.012973  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:18.015023  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:18.015043  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:18.015052  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:18.015067  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:18 GMT
	I1002 11:11:18.015087  355913 round_trippers.go:580]     Audit-Id: b3db9eb2-e636-49fc-9987-af06afa4dbd2
	I1002 11:11:18.015100  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:18.015113  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:18.015125  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:18.015355  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:18.015671  355913 pod_ready.go:102] pod "kube-apiserver-multinode-224116" in "kube-system" namespace has status "Ready":"False"
	I1002 11:11:18.508999  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-224116
	I1002 11:11:18.509021  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:18.509030  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:18.509037  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:18.511941  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:18.511975  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:18.511986  355913 round_trippers.go:580]     Audit-Id: 1d1840de-46cd-4a5b-8184-a9ce2c7d818d
	I1002 11:11:18.511995  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:18.512003  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:18.512020  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:18.512028  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:18.512036  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:18 GMT
	I1002 11:11:18.512321  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-224116","namespace":"kube-system","uid":"26841310-e8b5-409e-8915-888db5e257ab","resourceVersion":"773","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.165:8443","kubernetes.io/config.hash":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.mirror":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.seen":"2023-10-02T11:00:31.044391274Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1002 11:11:18.512889  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:18.512908  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:18.512925  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:18.512946  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:18.515275  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:18.515298  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:18.515309  355913 round_trippers.go:580]     Audit-Id: b81db4ba-1774-4386-a3ec-172cb3deb54a
	I1002 11:11:18.515317  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:18.515326  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:18.515334  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:18.515343  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:18.515354  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:18 GMT
	I1002 11:11:18.515633  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:19.009364  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-224116
	I1002 11:11:19.009392  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:19.009400  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:19.009406  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:19.013220  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:11:19.013247  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:19.013256  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:19 GMT
	I1002 11:11:19.013264  355913 round_trippers.go:580]     Audit-Id: 8b2bf9be-67b1-4ab4-bf09-21c1146634cc
	I1002 11:11:19.013272  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:19.013279  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:19.013287  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:19.013295  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:19.013884  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-224116","namespace":"kube-system","uid":"26841310-e8b5-409e-8915-888db5e257ab","resourceVersion":"773","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.165:8443","kubernetes.io/config.hash":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.mirror":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.seen":"2023-10-02T11:00:31.044391274Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1002 11:11:19.014313  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:19.014328  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:19.014338  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:19.014346  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:19.016603  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:19.016624  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:19.016633  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:19 GMT
	I1002 11:11:19.016642  355913 round_trippers.go:580]     Audit-Id: 324b6c32-41e6-40a5-a4bd-e3e09180d452
	I1002 11:11:19.016651  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:19.016660  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:19.016669  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:19.016678  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:19.016958  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:19.508983  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-224116
	I1002 11:11:19.509012  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:19.509023  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:19.509032  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:19.511843  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:19.511865  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:19.511872  355913 round_trippers.go:580]     Audit-Id: 9e2bff7b-f547-4c34-ba60-20071d1891dc
	I1002 11:11:19.511878  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:19.511887  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:19.511894  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:19.511903  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:19.511909  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:19 GMT
	I1002 11:11:19.512124  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-224116","namespace":"kube-system","uid":"26841310-e8b5-409e-8915-888db5e257ab","resourceVersion":"773","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.165:8443","kubernetes.io/config.hash":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.mirror":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.seen":"2023-10-02T11:00:31.044391274Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1002 11:11:19.512630  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:19.512646  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:19.512656  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:19.512665  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:19.515049  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:19.515066  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:19.515075  355913 round_trippers.go:580]     Audit-Id: b4727414-728f-4020-a174-d1b004da8ce2
	I1002 11:11:19.515083  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:19.515092  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:19.515105  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:19.515111  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:19.515119  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:19 GMT
	I1002 11:11:19.515478  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:20.009217  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-224116
	I1002 11:11:20.009241  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:20.009249  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:20.009255  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:20.011773  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:20.011794  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:20.011801  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:20 GMT
	I1002 11:11:20.011806  355913 round_trippers.go:580]     Audit-Id: a65a3813-0416-496e-9108-81541a1ca37d
	I1002 11:11:20.011811  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:20.011816  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:20.011821  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:20.011827  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:20.012574  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-224116","namespace":"kube-system","uid":"26841310-e8b5-409e-8915-888db5e257ab","resourceVersion":"862","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.165:8443","kubernetes.io/config.hash":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.mirror":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.seen":"2023-10-02T11:00:31.044391274Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1002 11:11:20.013000  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:20.013012  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:20.013019  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:20.013026  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:20.017426  355913 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 11:11:20.017445  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:20.017454  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:20.017462  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:20.017470  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:20.017480  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:20.017489  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:20 GMT
	I1002 11:11:20.017498  355913 round_trippers.go:580]     Audit-Id: d4c59527-f1d0-4cda-8f77-0b472b84ea90
	I1002 11:11:20.017712  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:20.018140  355913 pod_ready.go:92] pod "kube-apiserver-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:11:20.018162  355913 pod_ready.go:81] duration metric: took 4.499078367s waiting for pod "kube-apiserver-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:20.018176  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:20.018250  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-224116
	I1002 11:11:20.018260  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:20.018270  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:20.018279  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:20.023026  355913 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 11:11:20.023044  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:20.023054  355913 round_trippers.go:580]     Audit-Id: a8410091-e0de-4af1-b12e-c22085fbfd43
	I1002 11:11:20.023062  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:20.023070  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:20.023079  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:20.023090  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:20.023100  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:20 GMT
	I1002 11:11:20.023366  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-224116","namespace":"kube-system","uid":"7d71d06a-a323-41ce-a7a4-c7d33880f9fa","resourceVersion":"832","creationTimestamp":"2023-10-02T11:00:40Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3bee2fdb49df38e62a3033b15b9a59ad","kubernetes.io/config.mirror":"3bee2fdb49df38e62a3033b15b9a59ad","kubernetes.io/config.seen":"2023-10-02T11:00:39.980801936Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1002 11:11:20.023784  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:20.023798  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:20.023805  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:20.023812  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:20.026032  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:20.026047  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:20.026057  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:20.026064  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:20.026071  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:20 GMT
	I1002 11:11:20.026080  355913 round_trippers.go:580]     Audit-Id: 370be8d5-9196-4cbb-9708-6d508e79ced3
	I1002 11:11:20.026089  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:20.026102  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:20.026425  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:20.026752  355913 pod_ready.go:92] pod "kube-controller-manager-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:11:20.026767  355913 pod_ready.go:81] duration metric: took 8.57724ms waiting for pod "kube-controller-manager-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:20.026778  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8tg2f" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:20.026830  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tg2f
	I1002 11:11:20.026838  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:20.026845  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:20.026851  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:20.028862  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:20.028877  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:20.028885  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:20.028892  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:20.028900  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:20.028909  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:20 GMT
	I1002 11:11:20.028918  355913 round_trippers.go:580]     Audit-Id: 6e74170f-e6ea-4d17-a2fe-446ebe2a2553
	I1002 11:11:20.028933  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:20.029094  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8tg2f","generateName":"kube-proxy-","namespace":"kube-system","uid":"dd300e3b-222c-43bb-9997-2d1bddbc8e94","resourceVersion":"683","creationTimestamp":"2023-10-02T11:02:28Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3580e93b-6724-4c5c-baca-8e1b963cda9a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3580e93b-6724-4c5c-baca-8e1b963cda9a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1002 11:11:20.029462  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m03
	I1002 11:11:20.029475  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:20.029481  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:20.029488  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:20.031341  355913 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:11:20.031361  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:20.031370  355913 round_trippers.go:580]     Audit-Id: 2aaca2b5-e993-4c52-b1e4-02e2e6788c78
	I1002 11:11:20.031379  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:20.031387  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:20.031399  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:20.031413  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:20.031419  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:20 GMT
	I1002 11:11:20.031593  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m03","uid":"60156cb0-4b83-40ca-ab0d-93bdf316a64a","resourceVersion":"707","creationTimestamp":"2023-10-02T11:03:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:03:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I1002 11:11:20.031877  355913 pod_ready.go:92] pod "kube-proxy-8tg2f" in "kube-system" namespace has status "Ready":"True"
	I1002 11:11:20.031896  355913 pod_ready.go:81] duration metric: took 5.109561ms waiting for pod "kube-proxy-8tg2f" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:20.031908  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nshcj" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:20.203306  355913 request.go:629] Waited for 171.324355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nshcj
	I1002 11:11:20.203391  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nshcj
	I1002 11:11:20.203396  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:20.203404  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:20.203411  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:20.206279  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:20.206301  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:20.206312  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:20 GMT
	I1002 11:11:20.206321  355913 round_trippers.go:580]     Audit-Id: 523cb564-68ef-495f-8f74-0f31f64f6c03
	I1002 11:11:20.206331  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:20.206340  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:20.206346  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:20.206361  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:20.207040  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nshcj","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3def928-5e43-4f7e-8ae2-3c0daafd0003","resourceVersion":"800","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3580e93b-6724-4c5c-baca-8e1b963cda9a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3580e93b-6724-4c5c-baca-8e1b963cda9a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1002 11:11:20.402965  355913 request.go:629] Waited for 195.351804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:20.403025  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:20.403030  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:20.403039  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:20.403046  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:20.405391  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:20.405414  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:20.405424  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:20.405432  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:20 GMT
	I1002 11:11:20.405446  355913 round_trippers.go:580]     Audit-Id: 9cbe64fd-1abe-453b-9541-d2ccc723814a
	I1002 11:11:20.405455  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:20.405472  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:20.405481  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:20.405901  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:20.406298  355913 pod_ready.go:92] pod "kube-proxy-nshcj" in "kube-system" namespace has status "Ready":"True"
	I1002 11:11:20.406316  355913 pod_ready.go:81] duration metric: took 374.397455ms waiting for pod "kube-proxy-nshcj" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:20.406325  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rdt77" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:20.602783  355913 request.go:629] Waited for 196.387215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdt77
	I1002 11:11:20.602853  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdt77
	I1002 11:11:20.602858  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:20.602869  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:20.602875  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:20.606275  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:11:20.606301  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:20.606310  355913 round_trippers.go:580]     Audit-Id: f9113d85-72ef-4fd4-9b32-77da63cc3172
	I1002 11:11:20.606319  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:20.606327  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:20.606336  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:20.606344  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:20.606381  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:20 GMT
	I1002 11:11:20.606857  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rdt77","generateName":"kube-proxy-","namespace":"kube-system","uid":"96482fa7-e7e4-4375-b3b6-cc24f41d4bcf","resourceVersion":"477","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3580e93b-6724-4c5c-baca-8e1b963cda9a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3580e93b-6724-4c5c-baca-8e1b963cda9a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1002 11:11:20.802671  355913 request.go:629] Waited for 195.391755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:11:20.802755  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:11:20.802761  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:20.802770  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:20.802776  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:20.805814  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:11:20.805840  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:20.805851  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:20.805862  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:20.805871  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:20 GMT
	I1002 11:11:20.805884  355913 round_trippers.go:580]     Audit-Id: 3c63cec5-bda4-4751-8617-75ba984ce959
	I1002 11:11:20.805891  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:20.805896  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:20.806413  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15","resourceVersion":"711","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3684 chars]
	I1002 11:11:20.806683  355913 pod_ready.go:92] pod "kube-proxy-rdt77" in "kube-system" namespace has status "Ready":"True"
	I1002 11:11:20.806699  355913 pod_ready.go:81] duration metric: took 400.367163ms waiting for pod "kube-proxy-rdt77" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:20.806708  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:21.003158  355913 request.go:629] Waited for 196.382903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-224116
	I1002 11:11:21.003242  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-224116
	I1002 11:11:21.003247  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:21.003255  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:21.003262  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:21.006677  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:11:21.006705  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:21.006716  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:21.006724  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:21.006736  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:21.006748  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:21.006762  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:20 GMT
	I1002 11:11:21.006771  355913 round_trippers.go:580]     Audit-Id: cda6f15f-4ed9-4cfd-a61e-c4246a09ddb6
	I1002 11:11:21.007410  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-224116","namespace":"kube-system","uid":"66f95d23-f489-423f-9008-a7cf03a9ee55","resourceVersion":"834","creationTimestamp":"2023-10-02T11:00:40Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9bd8dc11ca1ef87923294a95bf3b31e7","kubernetes.io/config.mirror":"9bd8dc11ca1ef87923294a95bf3b31e7","kubernetes.io/config.seen":"2023-10-02T11:00:39.980802889Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1002 11:11:21.203199  355913 request.go:629] Waited for 195.365566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:21.203262  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:11:21.203267  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:21.203275  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:21.203282  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:21.206196  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:21.206225  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:21.206234  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:21.206244  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:21 GMT
	I1002 11:11:21.206252  355913 round_trippers.go:580]     Audit-Id: 8e6e3325-683c-4e35-a7c8-11c98fa0c2be
	I1002 11:11:21.206259  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:21.206268  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:21.206276  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:21.206548  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1002 11:11:21.206896  355913 pod_ready.go:92] pod "kube-scheduler-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:11:21.206913  355913 pod_ready.go:81] duration metric: took 400.199567ms waiting for pod "kube-scheduler-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:11:21.206923  355913 pod_ready.go:38] duration metric: took 9.88980774s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:11:21.206942  355913 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:11:21.206989  355913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:11:21.223788  355913 command_runner.go:130] > 1115
	I1002 11:11:21.224146  355913 api_server.go:72] duration metric: took 10.795873152s to wait for apiserver process to appear ...
	I1002 11:11:21.224169  355913 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:11:21.224188  355913 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I1002 11:11:21.232647  355913 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I1002 11:11:21.232714  355913 round_trippers.go:463] GET https://192.168.39.165:8443/version
	I1002 11:11:21.232721  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:21.232729  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:21.232737  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:21.234184  355913 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:11:21.234201  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:21.234209  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:21 GMT
	I1002 11:11:21.234215  355913 round_trippers.go:580]     Audit-Id: 1b0b0ca6-ea03-472a-bb04-50d59d35d851
	I1002 11:11:21.234221  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:21.234229  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:21.234234  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:21.234243  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:21.234249  355913 round_trippers.go:580]     Content-Length: 263
	I1002 11:11:21.234289  355913 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1002 11:11:21.234330  355913 api_server.go:141] control plane version: v1.28.2
	I1002 11:11:21.234344  355913 api_server.go:131] duration metric: took 10.169273ms to wait for apiserver health ...
	I1002 11:11:21.234360  355913 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:11:21.402785  355913 request.go:629] Waited for 168.344604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I1002 11:11:21.402866  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I1002 11:11:21.402872  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:21.402882  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:21.402893  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:21.408716  355913 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 11:11:21.408738  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:21.408745  355913 round_trippers.go:580]     Audit-Id: 734a5e9d-c5e4-406b-a9db-89dbb095fdbe
	I1002 11:11:21.408751  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:21.408756  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:21.408762  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:21.408773  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:21.408784  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:21 GMT
	I1002 11:11:21.410988  355913 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"862"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"841","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81886 chars]
	I1002 11:11:21.413400  355913 system_pods.go:59] 12 kube-system pods found
	I1002 11:11:21.413424  355913 system_pods.go:61] "coredns-5dd5756b68-h6gbq" [49ee2f4a-1c73-4642-bd3b-678e6cb9ef55] Running
	I1002 11:11:21.413431  355913 system_pods.go:61] "etcd-multinode-224116" [5accde9f-e62c-422f-aaa1-ddf4f8f0da05] Running
	I1002 11:11:21.413441  355913 system_pods.go:61] "kindnet-crtcw" [5db6eeb2-d639-49c6-a6d2-f8043567b6f2] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 11:11:21.413448  355913 system_pods.go:61] "kindnet-f7m28" [dc1438f0-bd67-457d-9e7e-b8998a01b029] Running
	I1002 11:11:21.413460  355913 system_pods.go:61] "kindnet-z2ps6" [069c01f2-f4f8-4dcf-922f-54693f17daed] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 11:11:21.413472  355913 system_pods.go:61] "kube-apiserver-multinode-224116" [26841310-e8b5-409e-8915-888db5e257ab] Running
	I1002 11:11:21.413481  355913 system_pods.go:61] "kube-controller-manager-multinode-224116" [7d71d06a-a323-41ce-a7a4-c7d33880f9fa] Running
	I1002 11:11:21.413495  355913 system_pods.go:61] "kube-proxy-8tg2f" [dd300e3b-222c-43bb-9997-2d1bddbc8e94] Running
	I1002 11:11:21.413502  355913 system_pods.go:61] "kube-proxy-nshcj" [f3def928-5e43-4f7e-8ae2-3c0daafd0003] Running
	I1002 11:11:21.413508  355913 system_pods.go:61] "kube-proxy-rdt77" [96482fa7-e7e4-4375-b3b6-cc24f41d4bcf] Running
	I1002 11:11:21.413519  355913 system_pods.go:61] "kube-scheduler-multinode-224116" [66f95d23-f489-423f-9008-a7cf03a9ee55] Running
	I1002 11:11:21.413528  355913 system_pods.go:61] "storage-provisioner" [ea5da043-58ea-4918-836d-19655c55b885] Running
	I1002 11:11:21.413538  355913 system_pods.go:74] duration metric: took 179.170368ms to wait for pod list to return data ...
	I1002 11:11:21.413551  355913 default_sa.go:34] waiting for default service account to be created ...
	I1002 11:11:21.603027  355913 request.go:629] Waited for 189.371742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I1002 11:11:21.603090  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I1002 11:11:21.603095  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:21.603103  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:21.603109  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:21.605889  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:11:21.605910  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:21.605917  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:21.605923  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:21.605928  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:21.605934  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:21.605940  355913 round_trippers.go:580]     Content-Length: 261
	I1002 11:11:21.605948  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:21 GMT
	I1002 11:11:21.605954  355913 round_trippers.go:580]     Audit-Id: 4c728920-b18e-47fa-8c66-56560bdf03f9
	I1002 11:11:21.605985  355913 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"862"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1d1f48a9-6a1e-4e03-8f78-cde5f832a3a7","resourceVersion":"304","creationTimestamp":"2023-10-02T11:00:52Z"}}]}
	I1002 11:11:21.606157  355913 default_sa.go:45] found service account: "default"
	I1002 11:11:21.606172  355913 default_sa.go:55] duration metric: took 192.611059ms for default service account to be created ...
	I1002 11:11:21.606180  355913 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 11:11:21.802566  355913 request.go:629] Waited for 196.29696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I1002 11:11:21.802641  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I1002 11:11:21.802648  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:21.802659  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:21.802670  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:21.808031  355913 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 11:11:21.808060  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:21.808070  355913 round_trippers.go:580]     Audit-Id: 81cb6887-3467-463d-9444-acb7acbf0a7b
	I1002 11:11:21.808078  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:21.808098  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:21.808106  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:21.808115  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:21.808127  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:21 GMT
	I1002 11:11:21.809671  355913 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"862"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"841","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81886 chars]
	I1002 11:11:21.812160  355913 system_pods.go:86] 12 kube-system pods found
	I1002 11:11:21.812183  355913 system_pods.go:89] "coredns-5dd5756b68-h6gbq" [49ee2f4a-1c73-4642-bd3b-678e6cb9ef55] Running
	I1002 11:11:21.812189  355913 system_pods.go:89] "etcd-multinode-224116" [5accde9f-e62c-422f-aaa1-ddf4f8f0da05] Running
	I1002 11:11:21.812198  355913 system_pods.go:89] "kindnet-crtcw" [5db6eeb2-d639-49c6-a6d2-f8043567b6f2] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 11:11:21.812203  355913 system_pods.go:89] "kindnet-f7m28" [dc1438f0-bd67-457d-9e7e-b8998a01b029] Running
	I1002 11:11:21.812212  355913 system_pods.go:89] "kindnet-z2ps6" [069c01f2-f4f8-4dcf-922f-54693f17daed] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 11:11:21.812218  355913 system_pods.go:89] "kube-apiserver-multinode-224116" [26841310-e8b5-409e-8915-888db5e257ab] Running
	I1002 11:11:21.812226  355913 system_pods.go:89] "kube-controller-manager-multinode-224116" [7d71d06a-a323-41ce-a7a4-c7d33880f9fa] Running
	I1002 11:11:21.812232  355913 system_pods.go:89] "kube-proxy-8tg2f" [dd300e3b-222c-43bb-9997-2d1bddbc8e94] Running
	I1002 11:11:21.812239  355913 system_pods.go:89] "kube-proxy-nshcj" [f3def928-5e43-4f7e-8ae2-3c0daafd0003] Running
	I1002 11:11:21.812243  355913 system_pods.go:89] "kube-proxy-rdt77" [96482fa7-e7e4-4375-b3b6-cc24f41d4bcf] Running
	I1002 11:11:21.812249  355913 system_pods.go:89] "kube-scheduler-multinode-224116" [66f95d23-f489-423f-9008-a7cf03a9ee55] Running
	I1002 11:11:21.812253  355913 system_pods.go:89] "storage-provisioner" [ea5da043-58ea-4918-836d-19655c55b885] Running
	I1002 11:11:21.812262  355913 system_pods.go:126] duration metric: took 206.077951ms to wait for k8s-apps to be running ...
	I1002 11:11:21.812272  355913 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:11:21.812319  355913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:11:21.828196  355913 system_svc.go:56] duration metric: took 15.913616ms WaitForService to wait for kubelet.
	I1002 11:11:21.828227  355913 kubeadm.go:581] duration metric: took 11.39997538s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:11:21.828248  355913 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:11:22.002709  355913 request.go:629] Waited for 174.373026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes
	I1002 11:11:22.002794  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes
	I1002 11:11:22.002800  355913 round_trippers.go:469] Request Headers:
	I1002 11:11:22.002808  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:11:22.002814  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:11:22.005960  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:11:22.005997  355913 round_trippers.go:577] Response Headers:
	I1002 11:11:22.006006  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:11:22.006039  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:11:21 GMT
	I1002 11:11:22.006052  355913 round_trippers.go:580]     Audit-Id: e1d6e731-add2-4ba7-836e-0eadad0901ed
	I1002 11:11:22.006063  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:11:22.006074  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:11:22.006085  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:11:22.006509  355913 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"862"},"items":[{"metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"829","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15076 chars]
	I1002 11:11:22.007082  355913 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:11:22.007102  355913 node_conditions.go:123] node cpu capacity is 2
	I1002 11:11:22.007115  355913 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:11:22.007121  355913 node_conditions.go:123] node cpu capacity is 2
	I1002 11:11:22.007126  355913 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:11:22.007133  355913 node_conditions.go:123] node cpu capacity is 2
	I1002 11:11:22.007141  355913 node_conditions.go:105] duration metric: took 178.887128ms to run NodePressure ...
	I1002 11:11:22.007157  355913 start.go:228] waiting for startup goroutines ...
	I1002 11:11:22.007172  355913 start.go:233] waiting for cluster config update ...
	I1002 11:11:22.007182  355913 start.go:242] writing updated cluster config ...
	I1002 11:11:22.007629  355913 config.go:182] Loaded profile config "multinode-224116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:11:22.007738  355913 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/config.json ...
	I1002 11:11:22.011404  355913 out.go:177] * Starting worker node multinode-224116-m02 in cluster multinode-224116
	I1002 11:11:22.012866  355913 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:11:22.012888  355913 cache.go:57] Caching tarball of preloaded images
	I1002 11:11:22.013012  355913 preload.go:174] Found /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 11:11:22.013025  355913 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 11:11:22.013128  355913 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/config.json ...
	I1002 11:11:22.013304  355913 start.go:365] acquiring machines lock for multinode-224116-m02: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:11:22.013367  355913 start.go:369] acquired machines lock for "multinode-224116-m02" in 40.565µs
	I1002 11:11:22.013387  355913 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:11:22.013397  355913 fix.go:54] fixHost starting: m02
	I1002 11:11:22.013670  355913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:11:22.013709  355913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:11:22.028097  355913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35065
	I1002 11:11:22.028760  355913 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:11:22.029242  355913 main.go:141] libmachine: Using API Version  1
	I1002 11:11:22.029265  355913 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:11:22.029693  355913 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:11:22.029884  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .DriverName
	I1002 11:11:22.030088  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetState
	I1002 11:11:22.031680  355913 fix.go:102] recreateIfNeeded on multinode-224116-m02: state=Running err=<nil>
	W1002 11:11:22.031697  355913 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:11:22.033724  355913 out.go:177] * Updating the running kvm2 "multinode-224116-m02" VM ...
	I1002 11:11:22.035354  355913 machine.go:88] provisioning docker machine ...
	I1002 11:11:22.035375  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .DriverName
	I1002 11:11:22.035561  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetMachineName
	I1002 11:11:22.035730  355913 buildroot.go:166] provisioning hostname "multinode-224116-m02"
	I1002 11:11:22.035747  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetMachineName
	I1002 11:11:22.035909  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:11:22.038283  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:11:22.038767  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:11:22.038800  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:11:22.038907  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:11:22.039067  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:11:22.039236  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:11:22.039355  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:11:22.039535  355913 main.go:141] libmachine: Using SSH client type: native
	I1002 11:11:22.039849  355913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1002 11:11:22.039863  355913 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-224116-m02 && echo "multinode-224116-m02" | sudo tee /etc/hostname
	I1002 11:11:22.184860  355913 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-224116-m02
	
	I1002 11:11:22.184893  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:11:22.187865  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:11:22.188329  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:11:22.188362  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:11:22.188596  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:11:22.188806  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:11:22.188961  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:11:22.189138  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:11:22.189324  355913 main.go:141] libmachine: Using SSH client type: native
	I1002 11:11:22.189637  355913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1002 11:11:22.189654  355913 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-224116-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-224116-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-224116-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:11:22.320051  355913 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:11:22.320097  355913 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:11:22.320119  355913 buildroot.go:174] setting up certificates
	I1002 11:11:22.320133  355913 provision.go:83] configureAuth start
	I1002 11:11:22.320152  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetMachineName
	I1002 11:11:22.320448  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetIP
	I1002 11:11:22.323068  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:11:22.323455  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:11:22.323494  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:11:22.323588  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:11:22.326051  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:11:22.326443  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:11:22.326473  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:11:22.326668  355913 provision.go:138] copyHostCerts
	I1002 11:11:22.326706  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:11:22.326740  355913 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:11:22.326749  355913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:11:22.326815  355913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:11:22.326885  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:11:22.326902  355913 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:11:22.326909  355913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:11:22.326932  355913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:11:22.326972  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:11:22.326991  355913 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:11:22.326997  355913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:11:22.327017  355913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:11:22.327059  355913 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.multinode-224116-m02 san=[192.168.39.135 192.168.39.135 localhost 127.0.0.1 minikube multinode-224116-m02]
	I1002 11:11:22.423376  355913 provision.go:172] copyRemoteCerts
	I1002 11:11:22.423434  355913 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:11:22.423460  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:11:22.426057  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:11:22.426521  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:11:22.426574  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:11:22.426734  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:11:22.426943  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:11:22.427100  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:11:22.427229  355913 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02/id_rsa Username:docker}
	I1002 11:11:22.520426  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 11:11:22.520503  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:11:22.542431  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 11:11:22.542497  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1002 11:11:22.564829  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 11:11:22.564906  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 11:11:22.586759  355913 provision.go:86] duration metric: configureAuth took 266.607664ms
	I1002 11:11:22.586793  355913 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:11:22.587076  355913 config.go:182] Loaded profile config "multinode-224116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:11:22.587154  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:11:22.589790  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:11:22.590194  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:11:22.590241  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:11:22.590403  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:11:22.590642  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:11:22.590857  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:11:22.591036  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:11:22.591262  355913 main.go:141] libmachine: Using SSH client type: native
	I1002 11:11:22.591608  355913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1002 11:11:22.591626  355913 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:12:53.096812  355913 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:12:53.096847  355913 machine.go:91] provisioned docker machine in 1m31.061475927s
	I1002 11:12:53.096860  355913 start.go:300] post-start starting for "multinode-224116-m02" (driver="kvm2")
	I1002 11:12:53.096871  355913 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:12:53.096890  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .DriverName
	I1002 11:12:53.097201  355913 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:12:53.097227  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:12:53.099928  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:12:53.100287  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:12:53.100320  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:12:53.100543  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:12:53.100792  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:12:53.100963  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:12:53.101094  355913 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02/id_rsa Username:docker}
	I1002 11:12:53.197253  355913 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:12:53.201392  355913 command_runner.go:130] > NAME=Buildroot
	I1002 11:12:53.201412  355913 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I1002 11:12:53.201417  355913 command_runner.go:130] > ID=buildroot
	I1002 11:12:53.201423  355913 command_runner.go:130] > VERSION_ID=2021.02.12
	I1002 11:12:53.201428  355913 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1002 11:12:53.201737  355913 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:12:53.201758  355913 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:12:53.201840  355913 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:12:53.201936  355913 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:12:53.201950  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> /etc/ssl/certs/3398652.pem
	I1002 11:12:53.202051  355913 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:12:53.211008  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:12:53.233295  355913 start.go:303] post-start completed in 136.400888ms
	I1002 11:12:53.233317  355913 fix.go:56] fixHost completed within 1m31.219921915s
	I1002 11:12:53.233340  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:12:53.236191  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:12:53.236596  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:12:53.236630  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:12:53.236747  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:12:53.236963  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:12:53.237134  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:12:53.237307  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:12:53.237513  355913 main.go:141] libmachine: Using SSH client type: native
	I1002 11:12:53.237821  355913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1002 11:12:53.237832  355913 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:12:53.367084  355913 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696245173.361096528
	
	I1002 11:12:53.367115  355913 fix.go:206] guest clock: 1696245173.361096528
	I1002 11:12:53.367135  355913 fix.go:219] Guest: 2023-10-02 11:12:53.361096528 +0000 UTC Remote: 2023-10-02 11:12:53.233321842 +0000 UTC m=+449.228003447 (delta=127.774686ms)
	I1002 11:12:53.367158  355913 fix.go:190] guest clock delta is within tolerance: 127.774686ms
	I1002 11:12:53.367164  355913 start.go:83] releasing machines lock for "multinode-224116-m02", held for 1m31.353786158s
	I1002 11:12:53.367196  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .DriverName
	I1002 11:12:53.367489  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetIP
	I1002 11:12:53.370311  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:12:53.370745  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:12:53.370801  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:12:53.372761  355913 out.go:177] * Found network options:
	I1002 11:12:53.374154  355913 out.go:177]   - NO_PROXY=192.168.39.165
	W1002 11:12:53.375426  355913 proxy.go:119] fail to check proxy env: Error ip not in block
	I1002 11:12:53.375467  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .DriverName
	I1002 11:12:53.375983  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .DriverName
	I1002 11:12:53.376143  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .DriverName
	I1002 11:12:53.376245  355913 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:12:53.376290  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	W1002 11:12:53.376373  355913 proxy.go:119] fail to check proxy env: Error ip not in block
	I1002 11:12:53.376465  355913 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:12:53.376495  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:12:53.378820  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:12:53.379175  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:12:53.379212  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:12:53.379267  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:12:53.379362  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:12:53.379545  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:12:53.379639  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:12:53.379679  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:12:53.379722  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:12:53.379834  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:12:53.379903  355913 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02/id_rsa Username:docker}
	I1002 11:12:53.380004  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:12:53.380148  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:12:53.380280  355913 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02/id_rsa Username:docker}
	I1002 11:12:53.620373  355913 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 11:12:53.620374  355913 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 11:12:53.626923  355913 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 11:12:53.627086  355913 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:12:53.627165  355913 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:12:53.636167  355913 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 11:12:53.636191  355913 start.go:469] detecting cgroup driver to use...
	I1002 11:12:53.636261  355913 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:12:53.650742  355913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:12:53.664297  355913 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:12:53.664363  355913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:12:53.677511  355913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:12:53.690474  355913 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:12:53.823814  355913 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:12:53.975710  355913 docker.go:213] disabling docker service ...
	I1002 11:12:53.975781  355913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:12:53.990989  355913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:12:54.004058  355913 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:12:54.133009  355913 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:12:54.265700  355913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
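The run above stops and masks both cri-docker and docker before bringing up CRI-O, so that only one runtime owns the node's CRI socket. A minimal sketch of the same idea — unit names are taken from the log; `SYSTEMCTL` is an override hook added here so the sketch can be dry-run, not part of minikube itself:

```shell
# Stop, disable, and mask a competing runtime's systemd units so that
# socket activation cannot restart it behind CRI-O's back.
# SYSTEMCTL can be overridden (e.g. SYSTEMCTL=echo) for a dry run.
SYSTEMCTL="${SYSTEMCTL:-systemctl}"

disable_runtime() {
  for unit in "$@"; do
    "$SYSTEMCTL" stop -f "$unit"  || true   # ignore units that are absent
    "$SYSTEMCTL" disable "$unit"  || true
    "$SYSTEMCTL" mask "$unit"     || true   # mask, so nothing re-activates it
  done
}

disable_runtime cri-docker.socket cri-docker.service
disable_runtime docker.socket docker.service
```

Masking (rather than only disabling) matters because `docker.socket` is socket-activated: a stray client connection would otherwise restart the daemon.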
	I1002 11:12:54.279282  355913 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:12:54.297434  355913 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 11:12:54.297487  355913 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:12:54.297541  355913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:12:54.307775  355913 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:12:54.307838  355913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:12:54.319057  355913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:12:54.329204  355913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:12:54.339370  355913 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:12:54.350263  355913 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:12:54.360179  355913 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 11:12:54.360261  355913 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:12:54.369400  355913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:12:54.498304  355913 ssh_runner.go:195] Run: sudo systemctl restart crio
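The three `sed` invocations above edit `/etc/crio/crio.conf.d/02-crio.conf` in place: point CRI-O at the pause image kubeadm expects, align the cgroup driver with the kubelet, and pin conmon's cgroup. A sketch of the same edits, with the path made a parameter so it can be exercised against a scratch file (values copied from the log; assumes GNU sed):

```shell
# Rewrite a CRI-O drop-in config the way the log does. CONF defaults to the
# real drop-in path but can be pointed at a test file.
CONF="${CONF:-/etc/crio/crio.conf.d/02-crio.conf}"

configure_crio() {
  # Use the pause image that matches the Kubernetes version being deployed.
  sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
  # kubelet and CRI-O must agree on the cgroup driver.
  sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
  # Drop any stale conmon_cgroup line, then re-add it right after cgroup_manager.
  sed -i '/conmon_cgroup = .*/d' "$CONF"
  sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
}
```

The delete-then-append pair keeps the file idempotent: rerunning `configure_crio` never accumulates duplicate `conmon_cgroup` lines.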
	I1002 11:12:54.729555  355913 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:12:54.729629  355913 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:12:54.734853  355913 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 11:12:54.734880  355913 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 11:12:54.734890  355913 command_runner.go:130] > Device: 16h/22d	Inode: 1200        Links: 1
	I1002 11:12:54.734900  355913 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 11:12:54.734909  355913 command_runner.go:130] > Access: 2023-10-02 11:12:54.659233261 +0000
	I1002 11:12:54.734918  355913 command_runner.go:130] > Modify: 2023-10-02 11:12:54.659233261 +0000
	I1002 11:12:54.734933  355913 command_runner.go:130] > Change: 2023-10-02 11:12:54.659233261 +0000
	I1002 11:12:54.734938  355913 command_runner.go:130] >  Birth: -
	I1002 11:12:54.734964  355913 start.go:537] Will wait 60s for crictl version
	I1002 11:12:54.735018  355913 ssh_runner.go:195] Run: which crictl
	I1002 11:12:54.738928  355913 command_runner.go:130] > /usr/bin/crictl
	I1002 11:12:54.739071  355913 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:12:54.779300  355913 command_runner.go:130] > Version:  0.1.0
	I1002 11:12:54.779322  355913 command_runner.go:130] > RuntimeName:  cri-o
	I1002 11:12:54.779327  355913 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1002 11:12:54.779333  355913 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 11:12:54.779352  355913 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:12:54.779433  355913 ssh_runner.go:195] Run: crio --version
	I1002 11:12:54.829300  355913 command_runner.go:130] > crio version 1.24.1
	I1002 11:12:54.829333  355913 command_runner.go:130] > Version:          1.24.1
	I1002 11:12:54.829360  355913 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1002 11:12:54.829369  355913 command_runner.go:130] > GitTreeState:     dirty
	I1002 11:12:54.829377  355913 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1002 11:12:54.829384  355913 command_runner.go:130] > GoVersion:        go1.19.9
	I1002 11:12:54.829391  355913 command_runner.go:130] > Compiler:         gc
	I1002 11:12:54.829398  355913 command_runner.go:130] > Platform:         linux/amd64
	I1002 11:12:54.829411  355913 command_runner.go:130] > Linkmode:         dynamic
	I1002 11:12:54.829420  355913 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 11:12:54.829428  355913 command_runner.go:130] > SeccompEnabled:   true
	I1002 11:12:54.829432  355913 command_runner.go:130] > AppArmorEnabled:  false
	I1002 11:12:54.829511  355913 ssh_runner.go:195] Run: crio --version
	I1002 11:12:54.875654  355913 command_runner.go:130] > crio version 1.24.1
	I1002 11:12:54.875684  355913 command_runner.go:130] > Version:          1.24.1
	I1002 11:12:54.875694  355913 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1002 11:12:54.875700  355913 command_runner.go:130] > GitTreeState:     dirty
	I1002 11:12:54.875711  355913 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1002 11:12:54.875720  355913 command_runner.go:130] > GoVersion:        go1.19.9
	I1002 11:12:54.875726  355913 command_runner.go:130] > Compiler:         gc
	I1002 11:12:54.875733  355913 command_runner.go:130] > Platform:         linux/amd64
	I1002 11:12:54.875742  355913 command_runner.go:130] > Linkmode:         dynamic
	I1002 11:12:54.875757  355913 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 11:12:54.875768  355913 command_runner.go:130] > SeccompEnabled:   true
	I1002 11:12:54.875778  355913 command_runner.go:130] > AppArmorEnabled:  false
	I1002 11:12:54.877763  355913 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:12:54.879301  355913 out.go:177]   - env NO_PROXY=192.168.39.165
	I1002 11:12:54.880802  355913 main.go:141] libmachine: (multinode-224116-m02) Calling .GetIP
	I1002 11:12:54.883323  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:12:54.883701  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:12:54.883735  355913 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:12:54.883896  355913 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 11:12:54.888063  355913 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1002 11:12:54.888141  355913 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116 for IP: 192.168.39.135
	I1002 11:12:54.888167  355913 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:12:54.888303  355913 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:12:54.888357  355913 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:12:54.888376  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 11:12:54.888397  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 11:12:54.888414  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 11:12:54.888427  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 11:12:54.888520  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:12:54.888559  355913 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:12:54.888628  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:12:54.888683  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:12:54.888718  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:12:54.888751  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:12:54.888806  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:12:54.888845  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> /usr/share/ca-certificates/3398652.pem
	I1002 11:12:54.888867  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:12:54.888886  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem -> /usr/share/ca-certificates/339865.pem
	I1002 11:12:54.889289  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:12:54.913249  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:12:54.936998  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:12:54.960148  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:12:54.982668  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:12:55.004626  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:12:55.027893  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:12:55.051229  355913 ssh_runner.go:195] Run: openssl version
	I1002 11:12:55.056610  355913 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1002 11:12:55.056682  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:12:55.065942  355913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:12:55.070184  355913 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:12:55.070206  355913 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:12:55.070238  355913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:12:55.075303  355913 command_runner.go:130] > 3ec20f2e
	I1002 11:12:55.075498  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:12:55.083032  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:12:55.092077  355913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:12:55.096465  355913 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:12:55.096487  355913 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:12:55.096517  355913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:12:55.101584  355913 command_runner.go:130] > b5213941
	I1002 11:12:55.101901  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:12:55.109644  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:12:55.118633  355913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:12:55.122842  355913 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:12:55.123098  355913 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:12:55.123148  355913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:12:55.128195  355913 command_runner.go:130] > 51391683
	I1002 11:12:55.128441  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
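The `openssl x509 -hash` / `ln -fs` sequence above installs each certificate under `/usr/share/ca-certificates` and links it into `/etc/ssl/certs` by its OpenSSL subject hash (e.g. `b5213941.0`), which is the layout OpenSSL's hash-directory lookup expects. A sketch of that convention, with the directories made parameters rather than the real system paths:

```shell
# Install a CA certificate the way the log does: copy it into a store
# directory and create a <subject-hash>.0 symlink in the lookup directory.
# The ".0" suffix means "first certificate with this subject hash".
install_ca() {
  cert="$1"; store="$2"; links="$3"
  name="$(basename "$cert")"
  cp "$cert" "$store/$name"
  hash="$(openssl x509 -hash -noout -in "$cert")"   # e.g. b5213941
  ln -fs "$store/$name" "$links/$hash.0"
}
```

`ln -fs` makes the step idempotent, matching the log's `test -L … || ln -fs …` guard: re-running simply refreshes the symlink.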
	I1002 11:12:55.136318  355913 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:12:55.140143  355913 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 11:12:55.140190  355913 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 11:12:55.140277  355913 ssh_runner.go:195] Run: crio config
	I1002 11:12:55.188814  355913 command_runner.go:130] ! time="2023-10-02 11:12:55.183038414Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1002 11:12:55.188852  355913 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 11:12:55.197137  355913 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 11:12:55.197162  355913 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 11:12:55.197169  355913 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 11:12:55.197173  355913 command_runner.go:130] > #
	I1002 11:12:55.197180  355913 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 11:12:55.197190  355913 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 11:12:55.197199  355913 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 11:12:55.197210  355913 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 11:12:55.197216  355913 command_runner.go:130] > # reload'.
	I1002 11:12:55.197225  355913 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 11:12:55.197235  355913 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 11:12:55.197245  355913 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 11:12:55.197260  355913 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 11:12:55.197265  355913 command_runner.go:130] > [crio]
	I1002 11:12:55.197272  355913 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 11:12:55.197295  355913 command_runner.go:130] > # containers images, in this directory.
	I1002 11:12:55.197306  355913 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1002 11:12:55.197322  355913 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 11:12:55.197334  355913 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1002 11:12:55.197345  355913 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 11:12:55.197357  355913 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 11:12:55.197365  355913 command_runner.go:130] > storage_driver = "overlay"
	I1002 11:12:55.197371  355913 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 11:12:55.197379  355913 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 11:12:55.197384  355913 command_runner.go:130] > storage_option = [
	I1002 11:12:55.197391  355913 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1002 11:12:55.197395  355913 command_runner.go:130] > ]
	I1002 11:12:55.197403  355913 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 11:12:55.197409  355913 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 11:12:55.197416  355913 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 11:12:55.197421  355913 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 11:12:55.197429  355913 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 11:12:55.197434  355913 command_runner.go:130] > # always happen on a node reboot
	I1002 11:12:55.197446  355913 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 11:12:55.197452  355913 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 11:12:55.197458  355913 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 11:12:55.197477  355913 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 11:12:55.197484  355913 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1002 11:12:55.197492  355913 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 11:12:55.197502  355913 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 11:12:55.197507  355913 command_runner.go:130] > # internal_wipe = true
	I1002 11:12:55.197512  355913 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 11:12:55.197519  355913 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 11:12:55.197526  355913 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 11:12:55.197532  355913 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 11:12:55.197540  355913 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 11:12:55.197545  355913 command_runner.go:130] > [crio.api]
	I1002 11:12:55.197552  355913 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 11:12:55.197557  355913 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 11:12:55.197563  355913 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 11:12:55.197567  355913 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 11:12:55.197579  355913 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 11:12:55.197589  355913 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 11:12:55.197600  355913 command_runner.go:130] > # stream_port = "0"
	I1002 11:12:55.197609  355913 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 11:12:55.197617  355913 command_runner.go:130] > # stream_enable_tls = false
	I1002 11:12:55.197623  355913 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 11:12:55.197627  355913 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 11:12:55.197633  355913 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 11:12:55.197639  355913 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1002 11:12:55.197643  355913 command_runner.go:130] > # minutes.
	I1002 11:12:55.197647  355913 command_runner.go:130] > # stream_tls_cert = ""
	I1002 11:12:55.197653  355913 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 11:12:55.197658  355913 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1002 11:12:55.197663  355913 command_runner.go:130] > # stream_tls_key = ""
	I1002 11:12:55.197669  355913 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 11:12:55.197678  355913 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 11:12:55.197687  355913 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1002 11:12:55.197695  355913 command_runner.go:130] > # stream_tls_ca = ""
	I1002 11:12:55.197709  355913 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 11:12:55.197717  355913 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1002 11:12:55.197724  355913 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 11:12:55.197731  355913 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1002 11:12:55.197749  355913 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 11:12:55.197757  355913 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 11:12:55.197764  355913 command_runner.go:130] > [crio.runtime]
	I1002 11:12:55.197777  355913 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 11:12:55.197788  355913 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 11:12:55.197798  355913 command_runner.go:130] > # "nofile=1024:2048"
	I1002 11:12:55.197807  355913 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 11:12:55.197813  355913 command_runner.go:130] > # default_ulimits = [
	I1002 11:12:55.197817  355913 command_runner.go:130] > # ]
	I1002 11:12:55.197825  355913 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 11:12:55.197830  355913 command_runner.go:130] > # no_pivot = false
	I1002 11:12:55.197837  355913 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 11:12:55.197843  355913 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 11:12:55.197852  355913 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 11:12:55.197867  355913 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 11:12:55.197875  355913 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 11:12:55.197889  355913 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 11:12:55.197900  355913 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1002 11:12:55.197909  355913 command_runner.go:130] > # Cgroup setting for conmon
	I1002 11:12:55.197920  355913 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 11:12:55.197926  355913 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 11:12:55.197933  355913 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 11:12:55.197944  355913 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 11:12:55.197959  355913 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 11:12:55.197969  355913 command_runner.go:130] > conmon_env = [
	I1002 11:12:55.197979  355913 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1002 11:12:55.197988  355913 command_runner.go:130] > ]
	I1002 11:12:55.197998  355913 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 11:12:55.198008  355913 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 11:12:55.198015  355913 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 11:12:55.198024  355913 command_runner.go:130] > # default_env = [
	I1002 11:12:55.198030  355913 command_runner.go:130] > # ]
	I1002 11:12:55.198045  355913 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 11:12:55.198055  355913 command_runner.go:130] > # selinux = false
	I1002 11:12:55.198065  355913 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 11:12:55.198093  355913 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1002 11:12:55.198102  355913 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1002 11:12:55.198108  355913 command_runner.go:130] > # seccomp_profile = ""
	I1002 11:12:55.198121  355913 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1002 11:12:55.198132  355913 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1002 11:12:55.198145  355913 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1002 11:12:55.198156  355913 command_runner.go:130] > # which might increase security.
	I1002 11:12:55.198165  355913 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1002 11:12:55.198178  355913 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 11:12:55.198187  355913 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 11:12:55.198196  355913 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 11:12:55.198212  355913 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 11:12:55.198226  355913 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:12:55.198237  355913 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 11:12:55.198249  355913 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 11:12:55.198261  355913 command_runner.go:130] > # the cgroup blockio controller.
	I1002 11:12:55.198285  355913 command_runner.go:130] > # blockio_config_file = ""
	I1002 11:12:55.198299  355913 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 11:12:55.198306  355913 command_runner.go:130] > # irqbalance daemon.
	I1002 11:12:55.198319  355913 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 11:12:55.198332  355913 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 11:12:55.198344  355913 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:12:55.198368  355913 command_runner.go:130] > # rdt_config_file = ""
	I1002 11:12:55.198382  355913 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 11:12:55.198393  355913 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1002 11:12:55.198405  355913 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 11:12:55.198416  355913 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 11:12:55.198429  355913 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 11:12:55.198438  355913 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 11:12:55.198442  355913 command_runner.go:130] > # will be added.
	I1002 11:12:55.198448  355913 command_runner.go:130] > # default_capabilities = [
	I1002 11:12:55.198454  355913 command_runner.go:130] > # 	"CHOWN",
	I1002 11:12:55.198462  355913 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 11:12:55.198470  355913 command_runner.go:130] > # 	"FSETID",
	I1002 11:12:55.198480  355913 command_runner.go:130] > # 	"FOWNER",
	I1002 11:12:55.198487  355913 command_runner.go:130] > # 	"SETGID",
	I1002 11:12:55.198497  355913 command_runner.go:130] > # 	"SETUID",
	I1002 11:12:55.198503  355913 command_runner.go:130] > # 	"SETPCAP",
	I1002 11:12:55.198513  355913 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 11:12:55.198519  355913 command_runner.go:130] > # 	"KILL",
	I1002 11:12:55.198526  355913 command_runner.go:130] > # ]
	I1002 11:12:55.198534  355913 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 11:12:55.198547  355913 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 11:12:55.198555  355913 command_runner.go:130] > # default_sysctls = [
	I1002 11:12:55.198564  355913 command_runner.go:130] > # ]
	I1002 11:12:55.198573  355913 command_runner.go:130] > # List of devices on the host that a
	I1002 11:12:55.198586  355913 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 11:12:55.198596  355913 command_runner.go:130] > # allowed_devices = [
	I1002 11:12:55.198603  355913 command_runner.go:130] > # 	"/dev/fuse",
	I1002 11:12:55.198610  355913 command_runner.go:130] > # ]
	I1002 11:12:55.198615  355913 command_runner.go:130] > # List of additional devices, specified as
	I1002 11:12:55.198631  355913 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 11:12:55.198644  355913 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 11:12:55.198673  355913 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 11:12:55.198685  355913 command_runner.go:130] > # additional_devices = [
	I1002 11:12:55.198691  355913 command_runner.go:130] > # ]
	I1002 11:12:55.198697  355913 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 11:12:55.198701  355913 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 11:12:55.198706  355913 command_runner.go:130] > # 	"/etc/cdi",
	I1002 11:12:55.198713  355913 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 11:12:55.198721  355913 command_runner.go:130] > # ]
	I1002 11:12:55.198731  355913 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 11:12:55.198744  355913 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 11:12:55.198751  355913 command_runner.go:130] > # Defaults to false.
	I1002 11:12:55.198760  355913 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 11:12:55.198773  355913 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 11:12:55.198784  355913 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 11:12:55.198789  355913 command_runner.go:130] > # hooks_dir = [
	I1002 11:12:55.198800  355913 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 11:12:55.198810  355913 command_runner.go:130] > # ]
	I1002 11:12:55.198821  355913 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 11:12:55.198835  355913 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 11:12:55.198844  355913 command_runner.go:130] > # its default mounts from the following two files:
	I1002 11:12:55.198852  355913 command_runner.go:130] > #
	I1002 11:12:55.198863  355913 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 11:12:55.198873  355913 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 11:12:55.198881  355913 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 11:12:55.198890  355913 command_runner.go:130] > #
	I1002 11:12:55.198901  355913 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 11:12:55.198914  355913 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 11:12:55.198929  355913 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 11:12:55.198940  355913 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 11:12:55.198949  355913 command_runner.go:130] > #
	I1002 11:12:55.198954  355913 command_runner.go:130] > # default_mounts_file = ""
	I1002 11:12:55.198960  355913 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 11:12:55.198974  355913 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 11:12:55.198985  355913 command_runner.go:130] > pids_limit = 1024
	I1002 11:12:55.198996  355913 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1002 11:12:55.199009  355913 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 11:12:55.199024  355913 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 11:12:55.199038  355913 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 11:12:55.199046  355913 command_runner.go:130] > # log_size_max = -1
	I1002 11:12:55.199056  355913 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 11:12:55.199067  355913 command_runner.go:130] > # log_to_journald = false
	I1002 11:12:55.199085  355913 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 11:12:55.199097  355913 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 11:12:55.199109  355913 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 11:12:55.199120  355913 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 11:12:55.199130  355913 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 11:12:55.199135  355913 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 11:12:55.199147  355913 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 11:12:55.199158  355913 command_runner.go:130] > # read_only = false
	I1002 11:12:55.199172  355913 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 11:12:55.199185  355913 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 11:12:55.199195  355913 command_runner.go:130] > # live configuration reload.
	I1002 11:12:55.199205  355913 command_runner.go:130] > # log_level = "info"
	I1002 11:12:55.199213  355913 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 11:12:55.199222  355913 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:12:55.199228  355913 command_runner.go:130] > # log_filter = ""
	I1002 11:12:55.199243  355913 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 11:12:55.199256  355913 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 11:12:55.199266  355913 command_runner.go:130] > # separated by comma.
	I1002 11:12:55.199274  355913 command_runner.go:130] > # uid_mappings = ""
	I1002 11:12:55.199288  355913 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 11:12:55.199299  355913 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 11:12:55.199306  355913 command_runner.go:130] > # separated by comma.
	I1002 11:12:55.199313  355913 command_runner.go:130] > # gid_mappings = ""
	I1002 11:12:55.199323  355913 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 11:12:55.199337  355913 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 11:12:55.199350  355913 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 11:12:55.199360  355913 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 11:12:55.199373  355913 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 11:12:55.199384  355913 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 11:12:55.199394  355913 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 11:12:55.199405  355913 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 11:12:55.199419  355913 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 11:12:55.199432  355913 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 11:12:55.199444  355913 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 11:12:55.199454  355913 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 11:12:55.199464  355913 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 11:12:55.199474  355913 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 11:12:55.199479  355913 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 11:12:55.199487  355913 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 11:12:55.199502  355913 command_runner.go:130] > drop_infra_ctr = false
	I1002 11:12:55.199515  355913 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 11:12:55.199528  355913 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 11:12:55.199543  355913 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 11:12:55.199553  355913 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 11:12:55.199562  355913 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 11:12:55.199570  355913 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 11:12:55.199581  355913 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 11:12:55.199597  355913 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 11:12:55.199607  355913 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1002 11:12:55.199621  355913 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 11:12:55.199634  355913 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1002 11:12:55.199646  355913 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1002 11:12:55.199655  355913 command_runner.go:130] > # default_runtime = "runc"
	I1002 11:12:55.199664  355913 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 11:12:55.199680  355913 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1002 11:12:55.199697  355913 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 11:12:55.199709  355913 command_runner.go:130] > # creation as a file is not desired either.
	I1002 11:12:55.199723  355913 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 11:12:55.199731  355913 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 11:12:55.199736  355913 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 11:12:55.199745  355913 command_runner.go:130] > # ]
	I1002 11:12:55.199756  355913 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 11:12:55.199770  355913 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 11:12:55.199784  355913 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1002 11:12:55.199797  355913 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1002 11:12:55.199806  355913 command_runner.go:130] > #
	I1002 11:12:55.199813  355913 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1002 11:12:55.199820  355913 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1002 11:12:55.199827  355913 command_runner.go:130] > #  runtime_type = "oci"
	I1002 11:12:55.199839  355913 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1002 11:12:55.199848  355913 command_runner.go:130] > #  privileged_without_host_devices = false
	I1002 11:12:55.199858  355913 command_runner.go:130] > #  allowed_annotations = []
	I1002 11:12:55.199866  355913 command_runner.go:130] > # Where:
	I1002 11:12:55.199875  355913 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1002 11:12:55.199888  355913 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1002 11:12:55.199900  355913 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 11:12:55.199911  355913 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 11:12:55.199920  355913 command_runner.go:130] > #   in $PATH.
	I1002 11:12:55.199934  355913 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1002 11:12:55.199945  355913 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 11:12:55.199956  355913 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1002 11:12:55.199963  355913 command_runner.go:130] > #   state.
	I1002 11:12:55.199976  355913 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 11:12:55.199986  355913 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 11:12:55.199993  355913 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 11:12:55.200005  355913 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 11:12:55.200019  355913 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 11:12:55.200041  355913 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 11:12:55.200052  355913 command_runner.go:130] > #   The currently recognized values are:
	I1002 11:12:55.200066  355913 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 11:12:55.200076  355913 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 11:12:55.200089  355913 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 11:12:55.200103  355913 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 11:12:55.200119  355913 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 11:12:55.200133  355913 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 11:12:55.200146  355913 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 11:12:55.200158  355913 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1002 11:12:55.200165  355913 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 11:12:55.200173  355913 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 11:12:55.200184  355913 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1002 11:12:55.200194  355913 command_runner.go:130] > runtime_type = "oci"
	I1002 11:12:55.200203  355913 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 11:12:55.200213  355913 command_runner.go:130] > runtime_config_path = ""
	I1002 11:12:55.200222  355913 command_runner.go:130] > monitor_path = ""
	I1002 11:12:55.200230  355913 command_runner.go:130] > monitor_cgroup = ""
	I1002 11:12:55.200237  355913 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 11:12:55.200247  355913 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1002 11:12:55.200251  355913 command_runner.go:130] > # running containers
	I1002 11:12:55.200259  355913 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1002 11:12:55.200273  355913 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1002 11:12:55.200307  355913 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1002 11:12:55.200319  355913 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1002 11:12:55.200329  355913 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1002 11:12:55.200334  355913 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1002 11:12:55.200344  355913 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1002 11:12:55.200352  355913 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1002 11:12:55.200364  355913 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1002 11:12:55.200375  355913 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1002 11:12:55.200389  355913 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 11:12:55.200401  355913 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 11:12:55.200413  355913 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 11:12:55.200423  355913 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1002 11:12:55.200438  355913 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1002 11:12:55.200452  355913 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 11:12:55.200469  355913 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 11:12:55.200485  355913 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 11:12:55.200499  355913 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 11:12:55.200510  355913 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 11:12:55.200519  355913 command_runner.go:130] > # Example:
	I1002 11:12:55.200531  355913 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 11:12:55.200543  355913 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 11:12:55.200554  355913 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 11:12:55.200567  355913 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 11:12:55.200576  355913 command_runner.go:130] > # cpuset = "0-1"
	I1002 11:12:55.200583  355913 command_runner.go:130] > # cpushares = 0
	I1002 11:12:55.200591  355913 command_runner.go:130] > # Where:
	I1002 11:12:55.200598  355913 command_runner.go:130] > # The workload name is workload-type.
	I1002 11:12:55.200613  355913 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 11:12:55.200625  355913 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 11:12:55.200638  355913 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 11:12:55.200654  355913 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 11:12:55.200666  355913 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1002 11:12:55.200674  355913 command_runner.go:130] > # 
	I1002 11:12:55.200682  355913 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 11:12:55.200690  355913 command_runner.go:130] > #
	I1002 11:12:55.200700  355913 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 11:12:55.200714  355913 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1002 11:12:55.200725  355913 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1002 11:12:55.200739  355913 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1002 11:12:55.200752  355913 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1002 11:12:55.200761  355913 command_runner.go:130] > [crio.image]
	I1002 11:12:55.200767  355913 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 11:12:55.200777  355913 command_runner.go:130] > # default_transport = "docker://"
	I1002 11:12:55.200789  355913 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 11:12:55.200803  355913 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 11:12:55.200811  355913 command_runner.go:130] > # global_auth_file = ""
	I1002 11:12:55.200822  355913 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 11:12:55.200834  355913 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:12:55.200845  355913 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1002 11:12:55.200854  355913 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 11:12:55.200863  355913 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 11:12:55.200875  355913 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:12:55.200886  355913 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 11:12:55.200896  355913 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 11:12:55.200909  355913 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1002 11:12:55.200922  355913 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1002 11:12:55.200933  355913 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 11:12:55.200940  355913 command_runner.go:130] > # pause_command = "/pause"
	I1002 11:12:55.200950  355913 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 11:12:55.200961  355913 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 11:12:55.200975  355913 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 11:12:55.200985  355913 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 11:12:55.200997  355913 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 11:12:55.201008  355913 command_runner.go:130] > # signature_policy = ""
	I1002 11:12:55.201020  355913 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 11:12:55.201029  355913 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 11:12:55.201041  355913 command_runner.go:130] > # changing them here.
	I1002 11:12:55.201051  355913 command_runner.go:130] > # insecure_registries = [
	I1002 11:12:55.201058  355913 command_runner.go:130] > # ]
	I1002 11:12:55.201073  355913 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 11:12:55.201088  355913 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 11:12:55.201100  355913 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 11:12:55.201109  355913 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 11:12:55.201117  355913 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 11:12:55.201130  355913 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 11:12:55.201141  355913 command_runner.go:130] > # CNI plugins.
	I1002 11:12:55.201151  355913 command_runner.go:130] > [crio.network]
	I1002 11:12:55.201164  355913 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 11:12:55.201176  355913 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1002 11:12:55.201186  355913 command_runner.go:130] > # cni_default_network = ""
	I1002 11:12:55.201196  355913 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 11:12:55.201206  355913 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 11:12:55.201219  355913 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 11:12:55.201229  355913 command_runner.go:130] > # plugin_dirs = [
	I1002 11:12:55.201239  355913 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 11:12:55.201248  355913 command_runner.go:130] > # ]
	I1002 11:12:55.201261  355913 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1002 11:12:55.201271  355913 command_runner.go:130] > [crio.metrics]
	I1002 11:12:55.201281  355913 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 11:12:55.201290  355913 command_runner.go:130] > enable_metrics = true
	I1002 11:12:55.201301  355913 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 11:12:55.201310  355913 command_runner.go:130] > # By default, all metrics are enabled.
	I1002 11:12:55.201323  355913 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 11:12:55.201337  355913 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 11:12:55.201350  355913 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 11:12:55.201360  355913 command_runner.go:130] > # metrics_collectors = [
	I1002 11:12:55.201367  355913 command_runner.go:130] > # 	"operations",
	I1002 11:12:55.201373  355913 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1002 11:12:55.201384  355913 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1002 11:12:55.201393  355913 command_runner.go:130] > # 	"operations_errors",
	I1002 11:12:55.201405  355913 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1002 11:12:55.201415  355913 command_runner.go:130] > # 	"image_pulls_by_name",
	I1002 11:12:55.201426  355913 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1002 11:12:55.201436  355913 command_runner.go:130] > # 	"image_pulls_failures",
	I1002 11:12:55.201446  355913 command_runner.go:130] > # 	"image_pulls_successes",
	I1002 11:12:55.201453  355913 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 11:12:55.201458  355913 command_runner.go:130] > # 	"image_layer_reuse",
	I1002 11:12:55.201464  355913 command_runner.go:130] > # 	"containers_oom_total",
	I1002 11:12:55.201468  355913 command_runner.go:130] > # 	"containers_oom",
	I1002 11:12:55.201475  355913 command_runner.go:130] > # 	"processes_defunct",
	I1002 11:12:55.201481  355913 command_runner.go:130] > # 	"operations_total",
	I1002 11:12:55.201493  355913 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 11:12:55.201505  355913 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 11:12:55.201516  355913 command_runner.go:130] > # 	"operations_errors_total",
	I1002 11:12:55.201526  355913 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 11:12:55.201537  355913 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 11:12:55.201548  355913 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 11:12:55.201557  355913 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 11:12:55.201561  355913 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 11:12:55.201566  355913 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 11:12:55.201573  355913 command_runner.go:130] > # ]
	I1002 11:12:55.201578  355913 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 11:12:55.201584  355913 command_runner.go:130] > # metrics_port = 9090
	I1002 11:12:55.201590  355913 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 11:12:55.201594  355913 command_runner.go:130] > # metrics_socket = ""
	I1002 11:12:55.201600  355913 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 11:12:55.201610  355913 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 11:12:55.201618  355913 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 11:12:55.201622  355913 command_runner.go:130] > # certificate on any modification event.
	I1002 11:12:55.201629  355913 command_runner.go:130] > # metrics_cert = ""
	I1002 11:12:55.201642  355913 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 11:12:55.201650  355913 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 11:12:55.201666  355913 command_runner.go:130] > # metrics_key = ""
	I1002 11:12:55.201679  355913 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 11:12:55.201689  355913 command_runner.go:130] > [crio.tracing]
	I1002 11:12:55.201701  355913 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 11:12:55.201708  355913 command_runner.go:130] > # enable_tracing = false
	I1002 11:12:55.201714  355913 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1002 11:12:55.201721  355913 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1002 11:12:55.201729  355913 command_runner.go:130] > # Number of samples to collect per million spans.
	I1002 11:12:55.201736  355913 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1002 11:12:55.201742  355913 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 11:12:55.201749  355913 command_runner.go:130] > [crio.stats]
	I1002 11:12:55.201756  355913 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 11:12:55.201764  355913 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 11:12:55.201771  355913 command_runner.go:130] > # stats_collection_period = 0
	I1002 11:12:55.201856  355913 cni.go:84] Creating CNI manager for ""
	I1002 11:12:55.201869  355913 cni.go:136] 3 nodes found, recommending kindnet
	I1002 11:12:55.201887  355913 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:12:55.201909  355913 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.135 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-224116 NodeName:multinode-224116-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:12:55.202035  355913 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-224116-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.135
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:12:55.202095  355913 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-224116-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-224116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:12:55.202146  355913 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:12:55.212747  355913 command_runner.go:130] > kubeadm
	I1002 11:12:55.212772  355913 command_runner.go:130] > kubectl
	I1002 11:12:55.212779  355913 command_runner.go:130] > kubelet
	I1002 11:12:55.212915  355913 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:12:55.212998  355913 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1002 11:12:55.221352  355913 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1002 11:12:55.237659  355913 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:12:55.251683  355913 ssh_runner.go:195] Run: grep 192.168.39.165	control-plane.minikube.internal$ /etc/hosts
	I1002 11:12:55.255138  355913 command_runner.go:130] > 192.168.39.165	control-plane.minikube.internal
	I1002 11:12:55.255326  355913 host.go:66] Checking if "multinode-224116" exists ...
	I1002 11:12:55.255642  355913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:12:55.255695  355913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:12:55.255702  355913 config.go:182] Loaded profile config "multinode-224116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:12:55.270399  355913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34397
	I1002 11:12:55.270865  355913 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:12:55.271334  355913 main.go:141] libmachine: Using API Version  1
	I1002 11:12:55.271354  355913 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:12:55.271639  355913 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:12:55.271822  355913 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:12:55.271945  355913 start.go:304] JoinCluster: &{Name:multinode-224116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.2 ClusterName:multinode-224116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.135 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.195 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:12:55.272054  355913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1002 11:12:55.272071  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:12:55.274786  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:12:55.275171  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:12:55.275202  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:12:55.275378  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:12:55.275562  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:12:55.275720  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:12:55.275835  355913 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa Username:docker}
	I1002 11:12:55.451083  355913 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 7riuj7.saztpvh38iuvvb6y --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 11:12:55.453146  355913 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.135 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1002 11:12:55.453186  355913 host.go:66] Checking if "multinode-224116" exists ...
	I1002 11:12:55.453476  355913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:12:55.453524  355913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:12:55.468455  355913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43791
	I1002 11:12:55.468894  355913 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:12:55.469322  355913 main.go:141] libmachine: Using API Version  1
	I1002 11:12:55.469347  355913 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:12:55.469660  355913 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:12:55.469854  355913 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:12:55.470045  355913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-224116-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1002 11:12:55.470072  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:12:55.472795  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:12:55.473241  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:12:55.473270  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:12:55.473376  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:12:55.473538  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:12:55.473724  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:12:55.473883  355913 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa Username:docker}
	I1002 11:12:55.684419  355913 command_runner.go:130] > node/multinode-224116-m02 cordoned
	I1002 11:12:58.720212  355913 command_runner.go:130] > pod "busybox-5bc68d56bd-jjswt" has DeletionTimestamp older than 1 seconds, skipping
	I1002 11:12:58.720242  355913 command_runner.go:130] > node/multinode-224116-m02 drained
	I1002 11:12:58.722249  355913 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1002 11:12:58.722279  355913 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-crtcw, kube-system/kube-proxy-rdt77
	I1002 11:12:58.722311  355913 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-224116-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.252234913s)
	I1002 11:12:58.722330  355913 node.go:108] successfully drained node "m02"
	I1002 11:12:58.722885  355913 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:12:58.723223  355913 kapi.go:59] client config for multinode-224116: &rest.Config{Host:"https://192.168.39.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:12:58.723753  355913 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1002 11:12:58.723812  355913 round_trippers.go:463] DELETE https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:12:58.723818  355913 round_trippers.go:469] Request Headers:
	I1002 11:12:58.723826  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:12:58.723834  355913 round_trippers.go:473]     Content-Type: application/json
	I1002 11:12:58.723841  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:12:58.739008  355913 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1002 11:12:58.739027  355913 round_trippers.go:577] Response Headers:
	I1002 11:12:58.739034  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:12:58 GMT
	I1002 11:12:58.739039  355913 round_trippers.go:580]     Audit-Id: ac07f854-5adb-4f9e-a46c-15778b1c9933
	I1002 11:12:58.739044  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:12:58.739049  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:12:58.739054  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:12:58.739059  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:12:58.739064  355913 round_trippers.go:580]     Content-Length: 171
	I1002 11:12:58.739209  355913 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-224116-m02","kind":"nodes","uid":"54af60b7-dde6-4332-b29e-9bbfe8fedb15"}}
	I1002 11:12:58.739251  355913 node.go:124] successfully deleted node "m02"
	I1002 11:12:58.739265  355913 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.135 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1002 11:12:58.739289  355913 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.135 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1002 11:12:58.739311  355913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7riuj7.saztpvh38iuvvb6y --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-224116-m02"
	I1002 11:12:58.790268  355913 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 11:12:58.973834  355913 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 11:12:58.973873  355913 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 11:12:59.039633  355913 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:12:59.040624  355913 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:12:59.040643  355913 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1002 11:12:59.176394  355913 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1002 11:12:59.696645  355913 command_runner.go:130] > This node has joined the cluster:
	I1002 11:12:59.696679  355913 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1002 11:12:59.696698  355913 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1002 11:12:59.696708  355913 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1002 11:12:59.699311  355913 command_runner.go:130] ! W1002 11:12:58.784349    2613 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1002 11:12:59.699338  355913 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 11:12:59.699355  355913 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 11:12:59.699367  355913 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 11:12:59.699423  355913 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1002 11:12:59.968090  355913 start.go:306] JoinCluster complete in 4.696138123s
	I1002 11:12:59.968131  355913 cni.go:84] Creating CNI manager for ""
	I1002 11:12:59.968139  355913 cni.go:136] 3 nodes found, recommending kindnet
	I1002 11:12:59.968198  355913 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 11:12:59.974374  355913 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1002 11:12:59.974405  355913 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1002 11:12:59.974416  355913 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1002 11:12:59.974426  355913 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 11:12:59.974435  355913 command_runner.go:130] > Access: 2023-10-02 11:10:34.846172782 +0000
	I1002 11:12:59.974443  355913 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I1002 11:12:59.974451  355913 command_runner.go:130] > Change: 2023-10-02 11:10:33.014172782 +0000
	I1002 11:12:59.974458  355913 command_runner.go:130] >  Birth: -
	I1002 11:12:59.974513  355913 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 11:12:59.974529  355913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 11:12:59.994109  355913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 11:13:00.410543  355913 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1002 11:13:00.410583  355913 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1002 11:13:00.410593  355913 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1002 11:13:00.410601  355913 command_runner.go:130] > daemonset.apps/kindnet configured
	I1002 11:13:00.411059  355913 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:13:00.411290  355913 kapi.go:59] client config for multinode-224116: &rest.Config{Host:"https://192.168.39.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:13:00.411641  355913 round_trippers.go:463] GET https://192.168.39.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 11:13:00.411655  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:00.411663  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:00.411669  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:00.413879  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:13:00.413899  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:00.413906  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:00.413913  355913 round_trippers.go:580]     Content-Length: 291
	I1002 11:13:00.413918  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:00 GMT
	I1002 11:13:00.413931  355913 round_trippers.go:580]     Audit-Id: 3a7845a3-aa54-4555-ab8f-7b18b7e61cd0
	I1002 11:13:00.413942  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:00.413954  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:00.413965  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:00.413992  355913 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08c5bbea-ba20-4e90-9cf5-25582be54095","resourceVersion":"860","creationTimestamp":"2023-10-02T11:00:39Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1002 11:13:00.414110  355913 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-224116" context rescaled to 1 replicas
	I1002 11:13:00.414136  355913 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.135 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1002 11:13:00.415859  355913 out.go:177] * Verifying Kubernetes components...
	I1002 11:13:00.417137  355913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:13:00.431711  355913 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:13:00.431921  355913 kapi.go:59] client config for multinode-224116: &rest.Config{Host:"https://192.168.39.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:13:00.432157  355913 node_ready.go:35] waiting up to 6m0s for node "multinode-224116-m02" to be "Ready" ...
	I1002 11:13:00.432217  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:13:00.432224  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:00.432232  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:00.432240  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:00.434443  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:13:00.434469  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:00.434480  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:00.434488  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:00 GMT
	I1002 11:13:00.434497  355913 round_trippers.go:580]     Audit-Id: 9d3d17e8-619b-483f-8ccc-64fffa87750a
	I1002 11:13:00.434510  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:00.434521  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:00.434532  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:00.434802  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"e1be40d7-fc74-480d-ac71-0bbc41a5beee","resourceVersion":"1003","creationTimestamp":"2023-10-02T11:12:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:12:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:12:59Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1002 11:13:00.435114  355913 node_ready.go:49] node "multinode-224116-m02" has status "Ready":"True"
	I1002 11:13:00.435132  355913 node_ready.go:38] duration metric: took 2.960151ms waiting for node "multinode-224116-m02" to be "Ready" ...
	I1002 11:13:00.435140  355913 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:13:00.435197  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I1002 11:13:00.435204  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:00.435212  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:00.435221  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:00.438677  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:13:00.438699  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:00.438709  355913 round_trippers.go:580]     Audit-Id: 58494b57-2bfe-419b-bd4d-f13e0869b3ed
	I1002 11:13:00.438717  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:00.438726  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:00.438735  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:00.438747  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:00.438759  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:00 GMT
	I1002 11:13:00.440761  355913 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1012"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"841","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82246 chars]
	I1002 11:13:00.443554  355913 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace to be "Ready" ...
	I1002 11:13:00.443624  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:13:00.443632  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:00.443639  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:00.443645  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:00.445900  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:13:00.445920  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:00.445928  355913 round_trippers.go:580]     Audit-Id: 8645c013-ed01-4b4b-b6b7-09d0375e524b
	I1002 11:13:00.445936  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:00.445943  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:00.445951  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:00.445961  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:00.445972  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:00 GMT
	I1002 11:13:00.446109  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"841","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1002 11:13:00.446568  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:13:00.446582  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:00.446589  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:00.446595  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:00.448748  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:13:00.448766  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:00.448776  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:00.448785  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:00.448799  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:00 GMT
	I1002 11:13:00.448806  355913 round_trippers.go:580]     Audit-Id: 6887f97b-11d3-4bbe-8c44-a2292487e217
	I1002 11:13:00.448820  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:00.448830  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:00.448975  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"872","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1002 11:13:00.449387  355913 pod_ready.go:92] pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace has status "Ready":"True"
	I1002 11:13:00.449406  355913 pod_ready.go:81] duration metric: took 5.82953ms waiting for pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace to be "Ready" ...
	I1002 11:13:00.449418  355913 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:13:00.449473  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-224116
	I1002 11:13:00.449481  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:00.449487  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:00.449493  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:00.451444  355913 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:13:00.451461  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:00.451470  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:00.451479  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:00.451491  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:00.451506  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:00 GMT
	I1002 11:13:00.451515  355913 round_trippers.go:580]     Audit-Id: d64507a6-63ff-46f4-aec3-54a9e4a4775f
	I1002 11:13:00.451528  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:00.451856  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-224116","namespace":"kube-system","uid":"5accde9f-e62c-422f-aaa1-ddf4f8f0da05","resourceVersion":"835","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.165:2379","kubernetes.io/config.hash":"6042e2dc777b7ecb1e5f00a006739c52","kubernetes.io/config.mirror":"6042e2dc777b7ecb1e5f00a006739c52","kubernetes.io/config.seen":"2023-10-02T11:00:31.044390279Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1002 11:13:00.452268  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:13:00.452283  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:00.452290  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:00.452295  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:00.454037  355913 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:13:00.454054  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:00.454061  355913 round_trippers.go:580]     Audit-Id: 34cda20a-dd0d-4695-90ae-fe6ebd5b5b2b
	I1002 11:13:00.454069  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:00.454077  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:00.454085  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:00.454092  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:00.454100  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:00 GMT
	I1002 11:13:00.454334  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"872","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1002 11:13:00.454713  355913 pod_ready.go:92] pod "etcd-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:13:00.454731  355913 pod_ready.go:81] duration metric: took 5.30623ms waiting for pod "etcd-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:13:00.454747  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:13:00.454791  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-224116
	I1002 11:13:00.454800  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:00.454806  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:00.454812  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:00.456817  355913 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:13:00.456831  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:00.456840  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:00.456849  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:00.456858  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:00.456873  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:00.456882  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:00 GMT
	I1002 11:13:00.456895  355913 round_trippers.go:580]     Audit-Id: 17947aec-9454-4e98-ba20-8e26f0a6cbc0
	I1002 11:13:00.457408  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-224116","namespace":"kube-system","uid":"26841310-e8b5-409e-8915-888db5e257ab","resourceVersion":"862","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.165:8443","kubernetes.io/config.hash":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.mirror":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.seen":"2023-10-02T11:00:31.044391274Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1002 11:13:00.457750  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:13:00.457765  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:00.457775  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:00.457783  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:00.459741  355913 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:13:00.459761  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:00.459770  355913 round_trippers.go:580]     Audit-Id: dd12aa50-bccc-4c8b-8fe5-4da4302570ee
	I1002 11:13:00.459778  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:00.459785  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:00.459794  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:00.459801  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:00.459818  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:00 GMT
	I1002 11:13:00.459994  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"872","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1002 11:13:00.460292  355913 pod_ready.go:92] pod "kube-apiserver-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:13:00.460307  355913 pod_ready.go:81] duration metric: took 5.552661ms waiting for pod "kube-apiserver-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:13:00.460319  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:13:00.460370  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-224116
	I1002 11:13:00.460380  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:00.460390  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:00.460400  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:00.462046  355913 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:13:00.462064  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:00.462073  355913 round_trippers.go:580]     Audit-Id: c311598d-7680-4447-90ea-506beeb8c9a7
	I1002 11:13:00.462086  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:00.462093  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:00.462101  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:00.462106  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:00.462112  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:00 GMT
	I1002 11:13:00.462448  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-224116","namespace":"kube-system","uid":"7d71d06a-a323-41ce-a7a4-c7d33880f9fa","resourceVersion":"832","creationTimestamp":"2023-10-02T11:00:40Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3bee2fdb49df38e62a3033b15b9a59ad","kubernetes.io/config.mirror":"3bee2fdb49df38e62a3033b15b9a59ad","kubernetes.io/config.seen":"2023-10-02T11:00:39.980801936Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1002 11:13:00.462789  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:13:00.462802  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:00.462812  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:00.462821  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:00.464398  355913 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 11:13:00.464417  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:00.464426  355913 round_trippers.go:580]     Audit-Id: c27d725d-cd47-4289-80e9-b14a1a08f960
	I1002 11:13:00.464435  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:00.464443  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:00.464456  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:00.464464  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:00.464475  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:00 GMT
	I1002 11:13:00.464690  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"872","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1002 11:13:00.464957  355913 pod_ready.go:92] pod "kube-controller-manager-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:13:00.464971  355913 pod_ready.go:81] duration metric: took 4.644554ms waiting for pod "kube-controller-manager-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:13:00.464982  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8tg2f" in "kube-system" namespace to be "Ready" ...
	I1002 11:13:00.632402  355913 request.go:629] Waited for 167.34936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tg2f
	I1002 11:13:00.632473  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tg2f
	I1002 11:13:00.632480  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:00.632491  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:00.632505  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:00.635830  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:13:00.635856  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:00.635867  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:00.635875  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:00 GMT
	I1002 11:13:00.635884  355913 round_trippers.go:580]     Audit-Id: c77b809b-2d8c-41a3-b336-440429b07c5d
	I1002 11:13:00.635901  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:00.635909  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:00.635920  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:00.636058  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8tg2f","generateName":"kube-proxy-","namespace":"kube-system","uid":"dd300e3b-222c-43bb-9997-2d1bddbc8e94","resourceVersion":"683","creationTimestamp":"2023-10-02T11:02:28Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3580e93b-6724-4c5c-baca-8e1b963cda9a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3580e93b-6724-4c5c-baca-8e1b963cda9a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1002 11:13:00.832919  355913 request.go:629] Waited for 196.382784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m03
	I1002 11:13:00.832985  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m03
	I1002 11:13:00.832990  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:00.832998  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:00.833006  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:00.835778  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:13:00.835804  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:00.835815  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:00.835824  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:00 GMT
	I1002 11:13:00.835833  355913 round_trippers.go:580]     Audit-Id: 1faf665e-4616-446f-8fa3-6c15de4bbbc5
	I1002 11:13:00.835842  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:00.835852  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:00.835861  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:00.835989  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m03","uid":"60156cb0-4b83-40ca-ab0d-93bdf316a64a","resourceVersion":"707","creationTimestamp":"2023-10-02T11:03:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:03:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I1002 11:13:00.836340  355913 pod_ready.go:92] pod "kube-proxy-8tg2f" in "kube-system" namespace has status "Ready":"True"
	I1002 11:13:00.836360  355913 pod_ready.go:81] duration metric: took 371.370566ms waiting for pod "kube-proxy-8tg2f" in "kube-system" namespace to be "Ready" ...
	I1002 11:13:00.836374  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nshcj" in "kube-system" namespace to be "Ready" ...
	I1002 11:13:01.032831  355913 request.go:629] Waited for 196.362687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nshcj
	I1002 11:13:01.032899  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nshcj
	I1002 11:13:01.032904  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:01.032918  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:01.032930  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:01.036001  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:13:01.036033  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:01.036051  355913 round_trippers.go:580]     Audit-Id: c66da4a3-c5c2-414a-88a7-e274a126f6a9
	I1002 11:13:01.036059  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:01.036068  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:01.036077  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:01.036088  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:01.036097  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:01 GMT
	I1002 11:13:01.036255  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nshcj","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3def928-5e43-4f7e-8ae2-3c0daafd0003","resourceVersion":"800","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3580e93b-6724-4c5c-baca-8e1b963cda9a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3580e93b-6724-4c5c-baca-8e1b963cda9a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1002 11:13:01.233285  355913 request.go:629] Waited for 196.42525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:13:01.233374  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:13:01.233383  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:01.233394  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:01.233404  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:01.235975  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:13:01.236000  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:01.236007  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:01.236012  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:01.236018  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:01.236025  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:01 GMT
	I1002 11:13:01.236033  355913 round_trippers.go:580]     Audit-Id: 37d4262b-4d89-4bb1-adf3-351d5c5eb449
	I1002 11:13:01.236041  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:01.236217  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"872","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1002 11:13:01.236672  355913 pod_ready.go:92] pod "kube-proxy-nshcj" in "kube-system" namespace has status "Ready":"True"
	I1002 11:13:01.236695  355913 pod_ready.go:81] duration metric: took 400.305523ms waiting for pod "kube-proxy-nshcj" in "kube-system" namespace to be "Ready" ...
	I1002 11:13:01.236708  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rdt77" in "kube-system" namespace to be "Ready" ...
	I1002 11:13:01.433200  355913 request.go:629] Waited for 196.413592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdt77
	I1002 11:13:01.433278  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdt77
	I1002 11:13:01.433284  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:01.433296  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:01.433307  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:01.435509  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:13:01.435535  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:01.435544  355913 round_trippers.go:580]     Audit-Id: 41da79f1-64e9-4f20-8c1b-ac195ba8f070
	I1002 11:13:01.435552  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:01.435559  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:01.435567  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:01.435577  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:01.435585  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:01 GMT
	I1002 11:13:01.435823  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rdt77","generateName":"kube-proxy-","namespace":"kube-system","uid":"96482fa7-e7e4-4375-b3b6-cc24f41d4bcf","resourceVersion":"1024","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3580e93b-6724-4c5c-baca-8e1b963cda9a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3580e93b-6724-4c5c-baca-8e1b963cda9a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I1002 11:13:01.632732  355913 request.go:629] Waited for 196.382354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:13:01.632809  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:13:01.632816  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:01.632829  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:01.632844  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:01.635453  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:13:01.635476  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:01.635487  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:01 GMT
	I1002 11:13:01.635496  355913 round_trippers.go:580]     Audit-Id: 5b826166-bdfc-4e25-b72e-a8dcc38fe878
	I1002 11:13:01.635504  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:01.635512  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:01.635520  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:01.635529  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:01.635712  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"e1be40d7-fc74-480d-ac71-0bbc41a5beee","resourceVersion":"1003","creationTimestamp":"2023-10-02T11:12:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:12:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:12:59Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1002 11:13:01.636028  355913 pod_ready.go:92] pod "kube-proxy-rdt77" in "kube-system" namespace has status "Ready":"True"
	I1002 11:13:01.636046  355913 pod_ready.go:81] duration metric: took 399.326871ms waiting for pod "kube-proxy-rdt77" in "kube-system" namespace to be "Ready" ...
	I1002 11:13:01.636055  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:13:01.832451  355913 request.go:629] Waited for 196.314107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-224116
	I1002 11:13:01.832534  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-224116
	I1002 11:13:01.832539  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:01.832548  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:01.832554  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:01.835379  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:13:01.835398  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:01.835405  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:01.835410  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:01.835415  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:01.835420  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:01.835435  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:01 GMT
	I1002 11:13:01.835443  355913 round_trippers.go:580]     Audit-Id: 82fc2ac0-bc45-4823-b4be-ce61371898b0
	I1002 11:13:01.835648  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-224116","namespace":"kube-system","uid":"66f95d23-f489-423f-9008-a7cf03a9ee55","resourceVersion":"834","creationTimestamp":"2023-10-02T11:00:40Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9bd8dc11ca1ef87923294a95bf3b31e7","kubernetes.io/config.mirror":"9bd8dc11ca1ef87923294a95bf3b31e7","kubernetes.io/config.seen":"2023-10-02T11:00:39.980802889Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1002 11:13:02.032377  355913 request.go:629] Waited for 196.298276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:13:02.032467  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:13:02.032476  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:02.032484  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:02.032492  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:02.036077  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:13:02.036099  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:02.036107  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:02.036112  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:02.036117  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:02.036127  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:02 GMT
	I1002 11:13:02.036136  355913 round_trippers.go:580]     Audit-Id: 7c77ca3e-23a6-42ae-9fb4-cc9781edc758
	I1002 11:13:02.036144  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:02.036320  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"872","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1002 11:13:02.036704  355913 pod_ready.go:92] pod "kube-scheduler-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:13:02.036721  355913 pod_ready.go:81] duration metric: took 400.659172ms waiting for pod "kube-scheduler-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:13:02.036733  355913 pod_ready.go:38] duration metric: took 1.601581755s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:13:02.036753  355913 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:13:02.036814  355913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:13:02.050257  355913 system_svc.go:56] duration metric: took 13.495388ms WaitForService to wait for kubelet.
	I1002 11:13:02.050287  355913 kubeadm.go:581] duration metric: took 1.63612606s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:13:02.050313  355913 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:13:02.232592  355913 request.go:629] Waited for 182.169744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes
	I1002 11:13:02.232661  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes
	I1002 11:13:02.232665  355913 round_trippers.go:469] Request Headers:
	I1002 11:13:02.232673  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:13:02.232680  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:13:02.236036  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:13:02.236058  355913 round_trippers.go:577] Response Headers:
	I1002 11:13:02.236066  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:13:02.236071  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:13:02.236079  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:13:02.236091  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:13:02.236099  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:13:02 GMT
	I1002 11:13:02.236104  355913 round_trippers.go:580]     Audit-Id: 874ea972-1731-4176-81c6-6271def65e63
	I1002 11:13:02.236353  355913 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1028"},"items":[{"metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"872","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15106 chars]
	I1002 11:13:02.236907  355913 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:13:02.236928  355913 node_conditions.go:123] node cpu capacity is 2
	I1002 11:13:02.236941  355913 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:13:02.236948  355913 node_conditions.go:123] node cpu capacity is 2
	I1002 11:13:02.236954  355913 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:13:02.236965  355913 node_conditions.go:123] node cpu capacity is 2
	I1002 11:13:02.236971  355913 node_conditions.go:105] duration metric: took 186.651992ms to run NodePressure ...
	I1002 11:13:02.236986  355913 start.go:228] waiting for startup goroutines ...
	I1002 11:13:02.237016  355913 start.go:242] writing updated cluster config ...
	I1002 11:13:02.237460  355913 config.go:182] Loaded profile config "multinode-224116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:13:02.237567  355913 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/config.json ...
	I1002 11:13:02.240393  355913 out.go:177] * Starting worker node multinode-224116-m03 in cluster multinode-224116
	I1002 11:13:02.241849  355913 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:13:02.241874  355913 cache.go:57] Caching tarball of preloaded images
	I1002 11:13:02.241980  355913 preload.go:174] Found /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 11:13:02.241996  355913 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 11:13:02.242100  355913 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/config.json ...
	I1002 11:13:02.242289  355913 start.go:365] acquiring machines lock for multinode-224116-m03: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:13:02.242348  355913 start.go:369] acquired machines lock for "multinode-224116-m03" in 37.225µs
	I1002 11:13:02.242385  355913 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:13:02.242392  355913 fix.go:54] fixHost starting: m03
	I1002 11:13:02.242680  355913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:13:02.242727  355913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:13:02.257675  355913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41737
	I1002 11:13:02.258166  355913 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:13:02.258689  355913 main.go:141] libmachine: Using API Version  1
	I1002 11:13:02.258711  355913 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:13:02.259048  355913 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:13:02.259219  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .DriverName
	I1002 11:13:02.259403  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetState
	I1002 11:13:02.261024  355913 fix.go:102] recreateIfNeeded on multinode-224116-m03: state=Running err=<nil>
	W1002 11:13:02.261046  355913 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:13:02.263041  355913 out.go:177] * Updating the running kvm2 "multinode-224116-m03" VM ...
	I1002 11:13:02.264295  355913 machine.go:88] provisioning docker machine ...
	I1002 11:13:02.264321  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .DriverName
	I1002 11:13:02.264502  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetMachineName
	I1002 11:13:02.264674  355913 buildroot.go:166] provisioning hostname "multinode-224116-m03"
	I1002 11:13:02.264699  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetMachineName
	I1002 11:13:02.264812  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHHostname
	I1002 11:13:02.267020  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:13:02.267551  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:46:6a", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:03:03 +0000 UTC Type:0 Mac:52:54:00:31:46:6a Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:multinode-224116-m03 Clientid:01:52:54:00:31:46:6a}
	I1002 11:13:02.267581  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:13:02.267771  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHPort
	I1002 11:13:02.267982  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHKeyPath
	I1002 11:13:02.268137  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHKeyPath
	I1002 11:13:02.268246  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHUsername
	I1002 11:13:02.268393  355913 main.go:141] libmachine: Using SSH client type: native
	I1002 11:13:02.268697  355913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1002 11:13:02.268709  355913 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-224116-m03 && echo "multinode-224116-m03" | sudo tee /etc/hostname
	I1002 11:13:02.392380  355913 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-224116-m03
	
	I1002 11:13:02.392420  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHHostname
	I1002 11:13:02.395464  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:13:02.395864  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:46:6a", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:03:03 +0000 UTC Type:0 Mac:52:54:00:31:46:6a Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:multinode-224116-m03 Clientid:01:52:54:00:31:46:6a}
	I1002 11:13:02.395895  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:13:02.396118  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHPort
	I1002 11:13:02.396322  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHKeyPath
	I1002 11:13:02.396505  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHKeyPath
	I1002 11:13:02.396654  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHUsername
	I1002 11:13:02.396867  355913 main.go:141] libmachine: Using SSH client type: native
	I1002 11:13:02.397223  355913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1002 11:13:02.397246  355913 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-224116-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-224116-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-224116-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:13:02.507103  355913 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:13:02.507138  355913 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:13:02.507160  355913 buildroot.go:174] setting up certificates
	I1002 11:13:02.507173  355913 provision.go:83] configureAuth start
	I1002 11:13:02.507187  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetMachineName
	I1002 11:13:02.507486  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetIP
	I1002 11:13:02.510131  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:13:02.510525  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:46:6a", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:03:03 +0000 UTC Type:0 Mac:52:54:00:31:46:6a Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:multinode-224116-m03 Clientid:01:52:54:00:31:46:6a}
	I1002 11:13:02.510562  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:13:02.510690  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHHostname
	I1002 11:13:02.512924  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:13:02.513272  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:46:6a", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:03:03 +0000 UTC Type:0 Mac:52:54:00:31:46:6a Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:multinode-224116-m03 Clientid:01:52:54:00:31:46:6a}
	I1002 11:13:02.513306  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:13:02.513365  355913 provision.go:138] copyHostCerts
	I1002 11:13:02.513412  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:13:02.513452  355913 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:13:02.513465  355913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:13:02.513556  355913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:13:02.513643  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:13:02.513661  355913 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:13:02.513666  355913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:13:02.513695  355913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:13:02.513739  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:13:02.513755  355913 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:13:02.513762  355913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:13:02.513781  355913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:13:02.513827  355913 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.multinode-224116-m03 san=[192.168.39.195 192.168.39.195 localhost 127.0.0.1 minikube multinode-224116-m03]
	I1002 11:13:02.603973  355913 provision.go:172] copyRemoteCerts
	I1002 11:13:02.604031  355913 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:13:02.604055  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHHostname
	I1002 11:13:02.606673  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:13:02.607034  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:46:6a", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:03:03 +0000 UTC Type:0 Mac:52:54:00:31:46:6a Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:multinode-224116-m03 Clientid:01:52:54:00:31:46:6a}
	I1002 11:13:02.607075  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:13:02.607293  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHPort
	I1002 11:13:02.607489  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHKeyPath
	I1002 11:13:02.607665  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHUsername
	I1002 11:13:02.607790  355913 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m03/id_rsa Username:docker}
	I1002 11:13:02.692397  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 11:13:02.692507  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:13:02.714713  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 11:13:02.714795  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1002 11:13:02.736120  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 11:13:02.736193  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 11:13:02.758271  355913 provision.go:86] duration metric: configureAuth took 251.080306ms
	I1002 11:13:02.758308  355913 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:13:02.758634  355913 config.go:182] Loaded profile config "multinode-224116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:13:02.758712  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHHostname
	I1002 11:13:02.761416  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:13:02.761802  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:46:6a", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:03:03 +0000 UTC Type:0 Mac:52:54:00:31:46:6a Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:multinode-224116-m03 Clientid:01:52:54:00:31:46:6a}
	I1002 11:13:02.761840  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:13:02.762047  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHPort
	I1002 11:13:02.762241  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHKeyPath
	I1002 11:13:02.762413  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHKeyPath
	I1002 11:13:02.762572  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHUsername
	I1002 11:13:02.762757  355913 main.go:141] libmachine: Using SSH client type: native
	I1002 11:13:02.763065  355913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1002 11:13:02.763082  355913 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:14:33.251125  355913 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:14:33.251168  355913 machine.go:91] provisioned docker machine in 1m30.986853677s
	I1002 11:14:33.251182  355913 start.go:300] post-start starting for "multinode-224116-m03" (driver="kvm2")
	I1002 11:14:33.251196  355913 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:14:33.251230  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .DriverName
	I1002 11:14:33.251567  355913 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:14:33.251607  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHHostname
	I1002 11:14:33.255065  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:14:33.255503  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:46:6a", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:03:03 +0000 UTC Type:0 Mac:52:54:00:31:46:6a Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:multinode-224116-m03 Clientid:01:52:54:00:31:46:6a}
	I1002 11:14:33.255539  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:14:33.255699  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHPort
	I1002 11:14:33.255912  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHKeyPath
	I1002 11:14:33.256133  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHUsername
	I1002 11:14:33.256322  355913 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m03/id_rsa Username:docker}
	I1002 11:14:33.345219  355913 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:14:33.350383  355913 command_runner.go:130] > NAME=Buildroot
	I1002 11:14:33.350411  355913 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I1002 11:14:33.350419  355913 command_runner.go:130] > ID=buildroot
	I1002 11:14:33.350427  355913 command_runner.go:130] > VERSION_ID=2021.02.12
	I1002 11:14:33.350431  355913 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1002 11:14:33.350466  355913 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:14:33.350484  355913 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:14:33.350567  355913 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:14:33.350664  355913 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:14:33.350679  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> /etc/ssl/certs/3398652.pem
	I1002 11:14:33.350785  355913 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:14:33.360384  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:14:33.384899  355913 start.go:303] post-start completed in 133.699986ms
	I1002 11:14:33.384924  355913 fix.go:56] fixHost completed within 1m31.142532128s
	I1002 11:14:33.384946  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHHostname
	I1002 11:14:33.387647  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:14:33.388115  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:46:6a", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:03:03 +0000 UTC Type:0 Mac:52:54:00:31:46:6a Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:multinode-224116-m03 Clientid:01:52:54:00:31:46:6a}
	I1002 11:14:33.388150  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:14:33.388312  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHPort
	I1002 11:14:33.388529  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHKeyPath
	I1002 11:14:33.388718  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHKeyPath
	I1002 11:14:33.388877  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHUsername
	I1002 11:14:33.389051  355913 main.go:141] libmachine: Using SSH client type: native
	I1002 11:14:33.389357  355913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1002 11:14:33.389369  355913 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:14:33.503136  355913 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696245273.497071361
	
	I1002 11:14:33.503169  355913 fix.go:206] guest clock: 1696245273.497071361
	I1002 11:14:33.503179  355913 fix.go:219] Guest: 2023-10-02 11:14:33.497071361 +0000 UTC Remote: 2023-10-02 11:14:33.384928348 +0000 UTC m=+549.379609939 (delta=112.143013ms)
	I1002 11:14:33.503199  355913 fix.go:190] guest clock delta is within tolerance: 112.143013ms
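The guest-clock check above compares the VM's `date +%s.%N` reading against the host's wall clock and accepts the fix if the delta is small. A rough local sketch (the 2s tolerance is an assumption for illustration, not necessarily minikube's exact threshold):

```shell
# Sketch of the guest-clock tolerance check logged above: take two
# fractional-second clock readings and verify their delta is under 2s.
# Assumes GNU date (%N nanoseconds), as on the Buildroot guest.
t_remote=$(date +%s.%N)
t_guest=$(date +%s.%N)
delta=$(awk -v a="$t_guest" -v b="$t_remote" 'BEGIN { d = a - b; if (d < 0) d = -d; print d }')
awk -v d="$delta" 'BEGIN { exit !(d < 2) }' && echo "guest clock delta is within tolerance: ${delta}s"
```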
	I1002 11:14:33.503205  355913 start.go:83] releasing machines lock for "multinode-224116-m03", held for 1m31.260829999s
	I1002 11:14:33.503233  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .DriverName
	I1002 11:14:33.503543  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetIP
	I1002 11:14:33.506290  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:14:33.506717  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:46:6a", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:03:03 +0000 UTC Type:0 Mac:52:54:00:31:46:6a Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:multinode-224116-m03 Clientid:01:52:54:00:31:46:6a}
	I1002 11:14:33.506756  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:14:33.509070  355913 out.go:177] * Found network options:
	I1002 11:14:33.510631  355913 out.go:177]   - NO_PROXY=192.168.39.165,192.168.39.135
	W1002 11:14:33.511912  355913 proxy.go:119] fail to check proxy env: Error ip not in block
	W1002 11:14:33.511929  355913 proxy.go:119] fail to check proxy env: Error ip not in block
	I1002 11:14:33.511943  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .DriverName
	I1002 11:14:33.512521  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .DriverName
	I1002 11:14:33.512702  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .DriverName
	I1002 11:14:33.512793  355913 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:14:33.512832  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHHostname
	W1002 11:14:33.512923  355913 proxy.go:119] fail to check proxy env: Error ip not in block
	W1002 11:14:33.512943  355913 proxy.go:119] fail to check proxy env: Error ip not in block
	I1002 11:14:33.513009  355913 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:14:33.513026  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHHostname
	I1002 11:14:33.515552  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:14:33.515758  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:14:33.515952  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:46:6a", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:03:03 +0000 UTC Type:0 Mac:52:54:00:31:46:6a Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:multinode-224116-m03 Clientid:01:52:54:00:31:46:6a}
	I1002 11:14:33.515979  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:14:33.516144  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHPort
	I1002 11:14:33.516288  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHKeyPath
	I1002 11:14:33.516303  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:46:6a", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:03:03 +0000 UTC Type:0 Mac:52:54:00:31:46:6a Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:multinode-224116-m03 Clientid:01:52:54:00:31:46:6a}
	I1002 11:14:33.516339  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:14:33.516468  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHPort
	I1002 11:14:33.516546  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHUsername
	I1002 11:14:33.516608  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHKeyPath
	I1002 11:14:33.516719  355913 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m03/id_rsa Username:docker}
	I1002 11:14:33.516786  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetSSHUsername
	I1002 11:14:33.516932  355913 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m03/id_rsa Username:docker}
	I1002 11:14:33.750682  355913 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 11:14:33.750691  355913 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 11:14:33.756572  355913 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 11:14:33.756650  355913 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:14:33.756722  355913 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:14:33.766542  355913 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
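The `find ... -exec mv` above renames any bridge/podman CNI configs so cri-o ignores them; here none were found. A local sketch of the rename against a temp directory (the sample filenames are illustrative, not from this run):

```shell
# Sketch of the bridge-CNI disable step logged above: rename matching
# configs to *.mk_disabled, leaving other CNI configs alone.
netd=$(mktemp -d)
touch "$netd/100-crio-bridge.conf" "$netd/87-podman.conflist" "$netd/10-flannel.conf"
find "$netd" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$netd"
```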
	I1002 11:14:33.766570  355913 start.go:469] detecting cgroup driver to use...
	I1002 11:14:33.766643  355913 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:14:33.781123  355913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:14:33.794303  355913 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:14:33.794385  355913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:14:33.808869  355913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:14:33.822440  355913 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:14:33.956141  355913 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:14:34.075342  355913 docker.go:213] disabling docker service ...
	I1002 11:14:34.075425  355913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:14:34.089190  355913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:14:34.102135  355913 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:14:34.220215  355913 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:14:34.340993  355913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:14:34.392653  355913 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:14:34.438837  355913 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
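The crictl configuration above points the CLI at the CRI-O socket. A local sketch, writing to a temp file instead of `/etc/crictl.yaml`:

```shell
# Sketch of the crictl config write logged above.
etcdir=$(mktemp -d)
printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | tee "$etcdir/crictl.yaml" >/dev/null
cat "$etcdir/crictl.yaml"
```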
	I1002 11:14:34.440194  355913 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:14:34.440275  355913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:14:34.458402  355913 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:14:34.458486  355913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:14:34.467917  355913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:14:34.476859  355913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:14:34.485968  355913 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
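The four sed invocations above rewrite `02-crio.conf` in place: set the pause image, switch the cgroup manager to cgroupfs, drop any existing `conmon_cgroup` line, and re-add it as `"pod"` directly after `cgroup_manager`. A sketch against a temp copy (the starting contents are a plausible sample, not the real file):

```shell
# Sketch of the cri-o config edits logged above, same sed expressions,
# applied to a sample 02-crio.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.6"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
cat "$conf"
```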
	I1002 11:14:34.495359  355913 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:14:34.503044  355913 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 11:14:34.503354  355913 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:14:34.511332  355913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:14:34.642114  355913 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:14:37.765272  355913 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.123113868s)
	I1002 11:14:37.765336  355913 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:14:37.765391  355913 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:14:37.771117  355913 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 11:14:37.771136  355913 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 11:14:37.771146  355913 command_runner.go:130] > Device: 16h/22d	Inode: 1214        Links: 1
	I1002 11:14:37.771153  355913 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 11:14:37.771158  355913 command_runner.go:130] > Access: 2023-10-02 11:14:37.675294040 +0000
	I1002 11:14:37.771164  355913 command_runner.go:130] > Modify: 2023-10-02 11:14:37.675294040 +0000
	I1002 11:14:37.771168  355913 command_runner.go:130] > Change: 2023-10-02 11:14:37.675294040 +0000
	I1002 11:14:37.771172  355913 command_runner.go:130] >  Birth: -
	I1002 11:14:37.771266  355913 start.go:537] Will wait 60s for crictl version
	I1002 11:14:37.771327  355913 ssh_runner.go:195] Run: which crictl
	I1002 11:14:37.775161  355913 command_runner.go:130] > /usr/bin/crictl
	I1002 11:14:37.775220  355913 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:14:37.817204  355913 command_runner.go:130] > Version:  0.1.0
	I1002 11:14:37.817233  355913 command_runner.go:130] > RuntimeName:  cri-o
	I1002 11:14:37.817242  355913 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1002 11:14:37.817250  355913 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 11:14:37.817374  355913 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:14:37.817464  355913 ssh_runner.go:195] Run: crio --version
	I1002 11:14:37.867819  355913 command_runner.go:130] > crio version 1.24.1
	I1002 11:14:37.867842  355913 command_runner.go:130] > Version:          1.24.1
	I1002 11:14:37.867849  355913 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1002 11:14:37.867853  355913 command_runner.go:130] > GitTreeState:     dirty
	I1002 11:14:37.867859  355913 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1002 11:14:37.867864  355913 command_runner.go:130] > GoVersion:        go1.19.9
	I1002 11:14:37.867868  355913 command_runner.go:130] > Compiler:         gc
	I1002 11:14:37.867872  355913 command_runner.go:130] > Platform:         linux/amd64
	I1002 11:14:37.867878  355913 command_runner.go:130] > Linkmode:         dynamic
	I1002 11:14:37.867885  355913 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 11:14:37.867891  355913 command_runner.go:130] > SeccompEnabled:   true
	I1002 11:14:37.867898  355913 command_runner.go:130] > AppArmorEnabled:  false
	I1002 11:14:37.869123  355913 ssh_runner.go:195] Run: crio --version
	I1002 11:14:37.917800  355913 command_runner.go:130] > crio version 1.24.1
	I1002 11:14:37.917838  355913 command_runner.go:130] > Version:          1.24.1
	I1002 11:14:37.917845  355913 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1002 11:14:37.917849  355913 command_runner.go:130] > GitTreeState:     dirty
	I1002 11:14:37.917856  355913 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1002 11:14:37.917860  355913 command_runner.go:130] > GoVersion:        go1.19.9
	I1002 11:14:37.917866  355913 command_runner.go:130] > Compiler:         gc
	I1002 11:14:37.917871  355913 command_runner.go:130] > Platform:         linux/amd64
	I1002 11:14:37.917877  355913 command_runner.go:130] > Linkmode:         dynamic
	I1002 11:14:37.917885  355913 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 11:14:37.917891  355913 command_runner.go:130] > SeccompEnabled:   true
	I1002 11:14:37.917895  355913 command_runner.go:130] > AppArmorEnabled:  false
	I1002 11:14:37.921405  355913 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:14:37.922992  355913 out.go:177]   - env NO_PROXY=192.168.39.165
	I1002 11:14:37.924597  355913 out.go:177]   - env NO_PROXY=192.168.39.165,192.168.39.135
	I1002 11:14:37.926144  355913 main.go:141] libmachine: (multinode-224116-m03) Calling .GetIP
	I1002 11:14:37.928867  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:14:37.929332  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:46:6a", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:03:03 +0000 UTC Type:0 Mac:52:54:00:31:46:6a Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:multinode-224116-m03 Clientid:01:52:54:00:31:46:6a}
	I1002 11:14:37.929367  355913 main.go:141] libmachine: (multinode-224116-m03) DBG | domain multinode-224116-m03 has defined IP address 192.168.39.195 and MAC address 52:54:00:31:46:6a in network mk-multinode-224116
	I1002 11:14:37.929565  355913 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 11:14:37.935296  355913 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1002 11:14:37.935562  355913 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116 for IP: 192.168.39.195
	I1002 11:14:37.935594  355913 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:14:37.935795  355913 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:14:37.935837  355913 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:14:37.935851  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 11:14:37.935867  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 11:14:37.935881  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 11:14:37.935893  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 11:14:37.935946  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:14:37.935977  355913 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:14:37.935987  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:14:37.936027  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:14:37.936058  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:14:37.936080  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:14:37.936134  355913 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:14:37.936163  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:14:37.936176  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem -> /usr/share/ca-certificates/339865.pem
	I1002 11:14:37.936190  355913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> /usr/share/ca-certificates/3398652.pem
	I1002 11:14:37.936660  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:14:37.961648  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:14:37.984766  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:14:38.008439  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:14:38.030916  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:14:38.053943  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:14:38.075866  355913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:14:38.097730  355913 ssh_runner.go:195] Run: openssl version
	I1002 11:14:38.103828  355913 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1002 11:14:38.103897  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:14:38.114513  355913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:14:38.119009  355913 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:14:38.119206  355913 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:14:38.119261  355913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:14:38.124559  355913 command_runner.go:130] > b5213941
	I1002 11:14:38.124818  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:14:38.134042  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:14:38.145015  355913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:14:38.149960  355913 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:14:38.150234  355913 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:14:38.150291  355913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:14:38.155668  355913 command_runner.go:130] > 51391683
	I1002 11:14:38.155898  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:14:38.165562  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:14:38.176314  355913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:14:38.180664  355913 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:14:38.180854  355913 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:14:38.180909  355913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:14:38.186677  355913 command_runner.go:130] > 3ec20f2e
	I1002 11:14:38.186884  355913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
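The three `openssl x509 -hash` / `ln -fs` pairs above build OpenSSL-style trust links: each CA file gets a symlink named `<subject-hash>.0` in `/etc/ssl/certs`. A local sketch using temp directories and the hash values taken from the log (`b5213941`, `51391683`, `3ec20f2e`); normally each hash comes from running `openssl x509 -hash -noout -in <cert>`:

```shell
# Sketch of the trust-store linking logged above: symlink each CA cert
# under its subject-hash name. Empty placeholder files stand in for the
# real PEMs.
certs=$(mktemp -d)
share=$(mktemp -d)
touch "$share/minikubeCA.pem" "$share/339865.pem" "$share/3398652.pem"
ln -fs "$share/minikubeCA.pem" "$certs/b5213941.0"
ln -fs "$share/339865.pem"     "$certs/51391683.0"
ln -fs "$share/3398652.pem"    "$certs/3ec20f2e.0"
ls -la "$certs"
```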
	I1002 11:14:38.195957  355913 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:14:38.199875  355913 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 11:14:38.199911  355913 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 11:14:38.199990  355913 ssh_runner.go:195] Run: crio config
	I1002 11:14:38.258522  355913 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 11:14:38.258546  355913 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 11:14:38.258552  355913 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 11:14:38.258556  355913 command_runner.go:130] > #
	I1002 11:14:38.258563  355913 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 11:14:38.258569  355913 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 11:14:38.258583  355913 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 11:14:38.258595  355913 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 11:14:38.258601  355913 command_runner.go:130] > # reload'.
	I1002 11:14:38.258611  355913 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 11:14:38.258629  355913 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 11:14:38.258644  355913 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 11:14:38.258657  355913 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 11:14:38.258666  355913 command_runner.go:130] > [crio]
	I1002 11:14:38.258676  355913 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 11:14:38.258693  355913 command_runner.go:130] > # containers images, in this directory.
	I1002 11:14:38.258728  355913 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1002 11:14:38.258747  355913 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 11:14:38.259357  355913 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1002 11:14:38.259379  355913 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 11:14:38.259390  355913 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 11:14:38.259557  355913 command_runner.go:130] > storage_driver = "overlay"
	I1002 11:14:38.259575  355913 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 11:14:38.259584  355913 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 11:14:38.259589  355913 command_runner.go:130] > storage_option = [
	I1002 11:14:38.259881  355913 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1002 11:14:38.260196  355913 command_runner.go:130] > ]
	I1002 11:14:38.260210  355913 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 11:14:38.260216  355913 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 11:14:38.260288  355913 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 11:14:38.260301  355913 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 11:14:38.260307  355913 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 11:14:38.260312  355913 command_runner.go:130] > # always happen on a node reboot
	I1002 11:14:38.260720  355913 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 11:14:38.260738  355913 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 11:14:38.260748  355913 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 11:14:38.260766  355913 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 11:14:38.261171  355913 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1002 11:14:38.261190  355913 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 11:14:38.261203  355913 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 11:14:38.261218  355913 command_runner.go:130] > # internal_wipe = true
	I1002 11:14:38.261227  355913 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 11:14:38.261237  355913 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 11:14:38.261243  355913 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 11:14:38.261249  355913 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 11:14:38.261256  355913 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 11:14:38.261262  355913 command_runner.go:130] > [crio.api]
	I1002 11:14:38.261268  355913 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 11:14:38.261274  355913 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 11:14:38.261280  355913 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 11:14:38.261287  355913 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 11:14:38.261299  355913 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 11:14:38.261307  355913 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 11:14:38.261312  355913 command_runner.go:130] > # stream_port = "0"
	I1002 11:14:38.261321  355913 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 11:14:38.261326  355913 command_runner.go:130] > # stream_enable_tls = false
	I1002 11:14:38.261332  355913 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 11:14:38.261337  355913 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 11:14:38.261346  355913 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 11:14:38.261352  355913 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1002 11:14:38.261358  355913 command_runner.go:130] > # minutes.
	I1002 11:14:38.261887  355913 command_runner.go:130] > # stream_tls_cert = ""
	I1002 11:14:38.261900  355913 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 11:14:38.261907  355913 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1002 11:14:38.261913  355913 command_runner.go:130] > # stream_tls_key = ""
	I1002 11:14:38.261919  355913 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 11:14:38.261925  355913 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 11:14:38.261931  355913 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1002 11:14:38.261935  355913 command_runner.go:130] > # stream_tls_ca = ""
	I1002 11:14:38.261944  355913 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 11:14:38.261958  355913 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1002 11:14:38.261969  355913 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 11:14:38.261981  355913 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1002 11:14:38.262002  355913 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 11:14:38.262014  355913 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 11:14:38.262018  355913 command_runner.go:130] > [crio.runtime]
	I1002 11:14:38.262024  355913 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 11:14:38.262030  355913 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 11:14:38.262040  355913 command_runner.go:130] > # "nofile=1024:2048"
	I1002 11:14:38.262050  355913 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 11:14:38.262060  355913 command_runner.go:130] > # default_ulimits = [
	I1002 11:14:38.262069  355913 command_runner.go:130] > # ]
	I1002 11:14:38.262081  355913 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 11:14:38.262091  355913 command_runner.go:130] > # no_pivot = false
	I1002 11:14:38.262100  355913 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 11:14:38.262112  355913 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 11:14:38.262121  355913 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 11:14:38.262139  355913 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 11:14:38.262150  355913 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 11:14:38.262164  355913 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 11:14:38.262175  355913 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1002 11:14:38.262182  355913 command_runner.go:130] > # Cgroup setting for conmon
	I1002 11:14:38.262196  355913 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 11:14:38.262207  355913 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 11:14:38.262218  355913 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 11:14:38.262227  355913 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 11:14:38.262235  355913 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 11:14:38.262241  355913 command_runner.go:130] > conmon_env = [
	I1002 11:14:38.262247  355913 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1002 11:14:38.262253  355913 command_runner.go:130] > ]
	I1002 11:14:38.262259  355913 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 11:14:38.262264  355913 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 11:14:38.262273  355913 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 11:14:38.262277  355913 command_runner.go:130] > # default_env = [
	I1002 11:14:38.262280  355913 command_runner.go:130] > # ]
	I1002 11:14:38.262286  355913 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 11:14:38.262293  355913 command_runner.go:130] > # selinux = false
	I1002 11:14:38.262299  355913 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 11:14:38.262310  355913 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1002 11:14:38.262318  355913 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1002 11:14:38.262322  355913 command_runner.go:130] > # seccomp_profile = ""
	I1002 11:14:38.262330  355913 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1002 11:14:38.262336  355913 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1002 11:14:38.262387  355913 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1002 11:14:38.262400  355913 command_runner.go:130] > # which might increase security.
	I1002 11:14:38.262408  355913 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1002 11:14:38.262418  355913 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 11:14:38.262429  355913 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 11:14:38.262435  355913 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 11:14:38.262442  355913 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 11:14:38.262447  355913 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:14:38.262454  355913 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 11:14:38.262460  355913 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 11:14:38.262468  355913 command_runner.go:130] > # the cgroup blockio controller.
	I1002 11:14:38.262472  355913 command_runner.go:130] > # blockio_config_file = ""
	I1002 11:14:38.262479  355913 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 11:14:38.262486  355913 command_runner.go:130] > # irqbalance daemon.
	I1002 11:14:38.262491  355913 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 11:14:38.262498  355913 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 11:14:38.262503  355913 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:14:38.262507  355913 command_runner.go:130] > # rdt_config_file = ""
	I1002 11:14:38.262514  355913 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 11:14:38.262519  355913 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1002 11:14:38.262527  355913 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 11:14:38.262531  355913 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 11:14:38.262540  355913 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 11:14:38.262547  355913 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 11:14:38.262554  355913 command_runner.go:130] > # will be added.
	I1002 11:14:38.262558  355913 command_runner.go:130] > # default_capabilities = [
	I1002 11:14:38.262562  355913 command_runner.go:130] > # 	"CHOWN",
	I1002 11:14:38.262565  355913 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 11:14:38.262571  355913 command_runner.go:130] > # 	"FSETID",
	I1002 11:14:38.262575  355913 command_runner.go:130] > # 	"FOWNER",
	I1002 11:14:38.262582  355913 command_runner.go:130] > # 	"SETGID",
	I1002 11:14:38.262586  355913 command_runner.go:130] > # 	"SETUID",
	I1002 11:14:38.262590  355913 command_runner.go:130] > # 	"SETPCAP",
	I1002 11:14:38.262596  355913 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 11:14:38.262599  355913 command_runner.go:130] > # 	"KILL",
	I1002 11:14:38.262603  355913 command_runner.go:130] > # ]
	I1002 11:14:38.262611  355913 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 11:14:38.262617  355913 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 11:14:38.262628  355913 command_runner.go:130] > # default_sysctls = [
	I1002 11:14:38.262631  355913 command_runner.go:130] > # ]
	I1002 11:14:38.262636  355913 command_runner.go:130] > # List of devices on the host that a
	I1002 11:14:38.262649  355913 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 11:14:38.262653  355913 command_runner.go:130] > # allowed_devices = [
	I1002 11:14:38.262658  355913 command_runner.go:130] > # 	"/dev/fuse",
	I1002 11:14:38.262661  355913 command_runner.go:130] > # ]
	I1002 11:14:38.262666  355913 command_runner.go:130] > # List of additional devices. specified as
	I1002 11:14:38.262674  355913 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 11:14:38.262681  355913 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 11:14:38.262698  355913 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 11:14:38.262705  355913 command_runner.go:130] > # additional_devices = [
	I1002 11:14:38.262709  355913 command_runner.go:130] > # ]
	I1002 11:14:38.262714  355913 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 11:14:38.262720  355913 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 11:14:38.262724  355913 command_runner.go:130] > # 	"/etc/cdi",
	I1002 11:14:38.262730  355913 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 11:14:38.262736  355913 command_runner.go:130] > # ]
	I1002 11:14:38.262748  355913 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 11:14:38.262756  355913 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 11:14:38.262764  355913 command_runner.go:130] > # Defaults to false.
	I1002 11:14:38.262775  355913 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 11:14:38.262784  355913 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 11:14:38.262797  355913 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 11:14:38.262806  355913 command_runner.go:130] > # hooks_dir = [
	I1002 11:14:38.262813  355913 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 11:14:38.262835  355913 command_runner.go:130] > # ]
	I1002 11:14:38.262848  355913 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 11:14:38.262859  355913 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 11:14:38.262871  355913 command_runner.go:130] > # its default mounts from the following two files:
	I1002 11:14:38.262876  355913 command_runner.go:130] > #
	I1002 11:14:38.262890  355913 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 11:14:38.262904  355913 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 11:14:38.262916  355913 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 11:14:38.262922  355913 command_runner.go:130] > #
	I1002 11:14:38.262931  355913 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 11:14:38.262944  355913 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 11:14:38.262957  355913 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 11:14:38.262968  355913 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 11:14:38.262974  355913 command_runner.go:130] > #
	I1002 11:14:38.262982  355913 command_runner.go:130] > # default_mounts_file = ""
	I1002 11:14:38.262989  355913 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 11:14:38.263002  355913 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 11:14:38.263010  355913 command_runner.go:130] > pids_limit = 1024
	I1002 11:14:38.263020  355913 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1002 11:14:38.263031  355913 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 11:14:38.263043  355913 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 11:14:38.263057  355913 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 11:14:38.263066  355913 command_runner.go:130] > # log_size_max = -1
	I1002 11:14:38.263076  355913 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1002 11:14:38.263085  355913 command_runner.go:130] > # log_to_journald = false
	I1002 11:14:38.263121  355913 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 11:14:38.263133  355913 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 11:14:38.263143  355913 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 11:14:38.263168  355913 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 11:14:38.263181  355913 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 11:14:38.263344  355913 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 11:14:38.263364  355913 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 11:14:38.263470  355913 command_runner.go:130] > # read_only = false
	I1002 11:14:38.263485  355913 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 11:14:38.263495  355913 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 11:14:38.263507  355913 command_runner.go:130] > # live configuration reload.
	I1002 11:14:38.263595  355913 command_runner.go:130] > # log_level = "info"
	I1002 11:14:38.263612  355913 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 11:14:38.263621  355913 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:14:38.263633  355913 command_runner.go:130] > # log_filter = ""
	I1002 11:14:38.263645  355913 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 11:14:38.263659  355913 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 11:14:38.263666  355913 command_runner.go:130] > # separated by comma.
	I1002 11:14:38.263677  355913 command_runner.go:130] > # uid_mappings = ""
	I1002 11:14:38.263689  355913 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 11:14:38.263704  355913 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 11:14:38.263713  355913 command_runner.go:130] > # separated by comma.
	I1002 11:14:38.263719  355913 command_runner.go:130] > # gid_mappings = ""
	I1002 11:14:38.263729  355913 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 11:14:38.263743  355913 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 11:14:38.263756  355913 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 11:14:38.263772  355913 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 11:14:38.263786  355913 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 11:14:38.263801  355913 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 11:14:38.263818  355913 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 11:14:38.263853  355913 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 11:14:38.263868  355913 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 11:14:38.263879  355913 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 11:14:38.263893  355913 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 11:14:38.263904  355913 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 11:14:38.263915  355913 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 11:14:38.263925  355913 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 11:14:38.263936  355913 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 11:14:38.263948  355913 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 11:14:38.263955  355913 command_runner.go:130] > drop_infra_ctr = false
	I1002 11:14:38.263968  355913 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 11:14:38.263980  355913 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 11:14:38.263996  355913 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 11:14:38.264006  355913 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 11:14:38.264019  355913 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 11:14:38.264031  355913 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 11:14:38.264040  355913 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 11:14:38.264052  355913 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 11:14:38.264062  355913 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1002 11:14:38.264077  355913 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 11:14:38.264094  355913 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1002 11:14:38.264108  355913 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1002 11:14:38.264119  355913 command_runner.go:130] > # default_runtime = "runc"
	I1002 11:14:38.264130  355913 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 11:14:38.264144  355913 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1002 11:14:38.264157  355913 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1002 11:14:38.264169  355913 command_runner.go:130] > # creation as a file is not desired either.
	I1002 11:14:38.264186  355913 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 11:14:38.264198  355913 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 11:14:38.264209  355913 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 11:14:38.264215  355913 command_runner.go:130] > # ]
	I1002 11:14:38.264227  355913 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 11:14:38.264241  355913 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 11:14:38.264255  355913 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1002 11:14:38.264272  355913 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1002 11:14:38.264284  355913 command_runner.go:130] > #
	I1002 11:14:38.264293  355913 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1002 11:14:38.264305  355913 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1002 11:14:38.264315  355913 command_runner.go:130] > #  runtime_type = "oci"
	I1002 11:14:38.264326  355913 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1002 11:14:38.264337  355913 command_runner.go:130] > #  privileged_without_host_devices = false
	I1002 11:14:38.264345  355913 command_runner.go:130] > #  allowed_annotations = []
	I1002 11:14:38.264353  355913 command_runner.go:130] > # Where:
	I1002 11:14:38.264361  355913 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1002 11:14:38.264374  355913 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1002 11:14:38.264418  355913 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 11:14:38.264432  355913 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 11:14:38.264442  355913 command_runner.go:130] > #   in $PATH.
	I1002 11:14:38.264453  355913 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1002 11:14:38.264465  355913 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 11:14:38.264479  355913 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1002 11:14:38.264486  355913 command_runner.go:130] > #   state.
	I1002 11:14:38.264500  355913 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 11:14:38.264513  355913 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1002 11:14:38.264526  355913 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 11:14:38.264539  355913 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 11:14:38.264553  355913 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 11:14:38.264567  355913 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 11:14:38.264579  355913 command_runner.go:130] > #   The currently recognized values are:
	I1002 11:14:38.264594  355913 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 11:14:38.264604  355913 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 11:14:38.264617  355913 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 11:14:38.264631  355913 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 11:14:38.264644  355913 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 11:14:38.264659  355913 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 11:14:38.264672  355913 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 11:14:38.264686  355913 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1002 11:14:38.264698  355913 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 11:14:38.264709  355913 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 11:14:38.264720  355913 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1002 11:14:38.264730  355913 command_runner.go:130] > runtime_type = "oci"
	I1002 11:14:38.264741  355913 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 11:14:38.264751  355913 command_runner.go:130] > runtime_config_path = ""
	I1002 11:14:38.264758  355913 command_runner.go:130] > monitor_path = ""
	I1002 11:14:38.264766  355913 command_runner.go:130] > monitor_cgroup = ""
	I1002 11:14:38.264771  355913 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 11:14:38.264782  355913 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1002 11:14:38.264792  355913 command_runner.go:130] > # running containers
	I1002 11:14:38.264802  355913 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1002 11:14:38.264816  355913 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1002 11:14:38.264850  355913 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1002 11:14:38.264864  355913 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1002 11:14:38.264873  355913 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1002 11:14:38.264880  355913 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1002 11:14:38.264889  355913 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1002 11:14:38.264901  355913 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1002 11:14:38.264909  355913 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1002 11:14:38.264920  355913 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1002 11:14:38.264932  355913 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 11:14:38.264945  355913 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 11:14:38.264958  355913 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 11:14:38.264974  355913 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I1002 11:14:38.264990  355913 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1002 11:14:38.265003  355913 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 11:14:38.265019  355913 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 11:14:38.265036  355913 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 11:14:38.265049  355913 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 11:14:38.265062  355913 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 11:14:38.265071  355913 command_runner.go:130] > # Example:
	I1002 11:14:38.265080  355913 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 11:14:38.265097  355913 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 11:14:38.265110  355913 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 11:14:38.265122  355913 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 11:14:38.265133  355913 command_runner.go:130] > # cpuset = 0
	I1002 11:14:38.265144  355913 command_runner.go:130] > # cpushares = "0-1"
	I1002 11:14:38.265152  355913 command_runner.go:130] > # Where:
	I1002 11:14:38.265161  355913 command_runner.go:130] > # The workload name is workload-type.
	I1002 11:14:38.265178  355913 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 11:14:38.265191  355913 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 11:14:38.265205  355913 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 11:14:38.265222  355913 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 11:14:38.265237  355913 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1002 11:14:38.265247  355913 command_runner.go:130] > # 
	I1002 11:14:38.265283  355913 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 11:14:38.265291  355913 command_runner.go:130] > #
	I1002 11:14:38.265302  355913 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 11:14:38.265317  355913 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1002 11:14:38.265332  355913 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1002 11:14:38.265346  355913 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1002 11:14:38.265359  355913 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1002 11:14:38.265369  355913 command_runner.go:130] > [crio.image]
	I1002 11:14:38.265383  355913 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 11:14:38.265394  355913 command_runner.go:130] > # default_transport = "docker://"
	I1002 11:14:38.265405  355913 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 11:14:38.265420  355913 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 11:14:38.265431  355913 command_runner.go:130] > # global_auth_file = ""
	I1002 11:14:38.265441  355913 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 11:14:38.265453  355913 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:14:38.265465  355913 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1002 11:14:38.265480  355913 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 11:14:38.265493  355913 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 11:14:38.265505  355913 command_runner.go:130] > # This option supports live configuration reload.
	I1002 11:14:38.265515  355913 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 11:14:38.265527  355913 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 11:14:38.265541  355913 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1002 11:14:38.265555  355913 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1002 11:14:38.265569  355913 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 11:14:38.265579  355913 command_runner.go:130] > # pause_command = "/pause"
	I1002 11:14:38.265591  355913 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 11:14:38.265605  355913 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 11:14:38.265619  355913 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 11:14:38.265635  355913 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 11:14:38.265648  355913 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 11:14:38.265659  355913 command_runner.go:130] > # signature_policy = ""
	I1002 11:14:38.265672  355913 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 11:14:38.265684  355913 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 11:14:38.265695  355913 command_runner.go:130] > # changing them here.
	I1002 11:14:38.265704  355913 command_runner.go:130] > # insecure_registries = [
	I1002 11:14:38.265713  355913 command_runner.go:130] > # ]
	I1002 11:14:38.265727  355913 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 11:14:38.265739  355913 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 11:14:38.265751  355913 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 11:14:38.265763  355913 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 11:14:38.265771  355913 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 11:14:38.265785  355913 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 11:14:38.265795  355913 command_runner.go:130] > # CNI plugins.
	I1002 11:14:38.265806  355913 command_runner.go:130] > [crio.network]
	I1002 11:14:38.265818  355913 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 11:14:38.265831  355913 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1002 11:14:38.265842  355913 command_runner.go:130] > # cni_default_network = ""
	I1002 11:14:38.265853  355913 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 11:14:38.265865  355913 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 11:14:38.265878  355913 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 11:14:38.265889  355913 command_runner.go:130] > # plugin_dirs = [
	I1002 11:14:38.265899  355913 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 11:14:38.265908  355913 command_runner.go:130] > # ]
	I1002 11:14:38.265919  355913 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1002 11:14:38.265928  355913 command_runner.go:130] > [crio.metrics]
	I1002 11:14:38.265939  355913 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 11:14:38.265950  355913 command_runner.go:130] > enable_metrics = true
	I1002 11:14:38.265959  355913 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 11:14:38.265971  355913 command_runner.go:130] > # Per default all metrics are enabled.
	I1002 11:14:38.265985  355913 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 11:14:38.266000  355913 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 11:14:38.266013  355913 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 11:14:38.266024  355913 command_runner.go:130] > # metrics_collectors = [
	I1002 11:14:38.266033  355913 command_runner.go:130] > # 	"operations",
	I1002 11:14:38.266046  355913 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1002 11:14:38.266058  355913 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1002 11:14:38.266069  355913 command_runner.go:130] > # 	"operations_errors",
	I1002 11:14:38.266080  355913 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1002 11:14:38.266095  355913 command_runner.go:130] > # 	"image_pulls_by_name",
	I1002 11:14:38.266104  355913 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1002 11:14:38.266115  355913 command_runner.go:130] > # 	"image_pulls_failures",
	I1002 11:14:38.266123  355913 command_runner.go:130] > # 	"image_pulls_successes",
	I1002 11:14:38.266134  355913 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 11:14:38.266142  355913 command_runner.go:130] > # 	"image_layer_reuse",
	I1002 11:14:38.266154  355913 command_runner.go:130] > # 	"containers_oom_total",
	I1002 11:14:38.266164  355913 command_runner.go:130] > # 	"containers_oom",
	I1002 11:14:38.266173  355913 command_runner.go:130] > # 	"processes_defunct",
	I1002 11:14:38.266183  355913 command_runner.go:130] > # 	"operations_total",
	I1002 11:14:38.266192  355913 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 11:14:38.266210  355913 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 11:14:38.266227  355913 command_runner.go:130] > # 	"operations_errors_total",
	I1002 11:14:38.266238  355913 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 11:14:38.266248  355913 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 11:14:38.266264  355913 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 11:14:38.266274  355913 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 11:14:38.266283  355913 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 11:14:38.266294  355913 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 11:14:38.266308  355913 command_runner.go:130] > # ]
	I1002 11:14:38.266320  355913 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 11:14:38.266330  355913 command_runner.go:130] > # metrics_port = 9090
	I1002 11:14:38.266345  355913 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 11:14:38.266371  355913 command_runner.go:130] > # metrics_socket = ""
	I1002 11:14:38.266406  355913 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 11:14:38.266420  355913 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 11:14:38.266435  355913 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 11:14:38.266447  355913 command_runner.go:130] > # certificate on any modification event.
	I1002 11:14:38.266458  355913 command_runner.go:130] > # metrics_cert = ""
	I1002 11:14:38.266470  355913 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 11:14:38.266482  355913 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 11:14:38.266489  355913 command_runner.go:130] > # metrics_key = ""
	I1002 11:14:38.266502  355913 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 11:14:38.266512  355913 command_runner.go:130] > [crio.tracing]
	I1002 11:14:38.266528  355913 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 11:14:38.266539  355913 command_runner.go:130] > # enable_tracing = false
	I1002 11:14:38.266550  355913 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1002 11:14:38.266562  355913 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1002 11:14:38.266574  355913 command_runner.go:130] > # Number of samples to collect per million spans.
	I1002 11:14:38.266585  355913 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1002 11:14:38.266596  355913 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 11:14:38.266607  355913 command_runner.go:130] > [crio.stats]
	I1002 11:14:38.266621  355913 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 11:14:38.266634  355913 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 11:14:38.266646  355913 command_runner.go:130] > # stats_collection_period = 0
	I1002 11:14:38.266697  355913 command_runner.go:130] ! time="2023-10-02 11:14:38.246018977Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1002 11:14:38.266718  355913 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 11:14:38.266799  355913 cni.go:84] Creating CNI manager for ""
	I1002 11:14:38.266811  355913 cni.go:136] 3 nodes found, recommending kindnet
	I1002 11:14:38.266825  355913 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:14:38.266855  355913 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-224116 NodeName:multinode-224116-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:14:38.267013  355913 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-224116-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:14:38.267092  355913 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-224116-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-224116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:14:38.267162  355913 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:14:38.278231  355913 command_runner.go:130] > kubeadm
	I1002 11:14:38.278253  355913 command_runner.go:130] > kubectl
	I1002 11:14:38.278258  355913 command_runner.go:130] > kubelet
	I1002 11:14:38.278283  355913 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:14:38.278344  355913 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1002 11:14:38.288427  355913 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1002 11:14:38.305566  355913 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:14:38.321722  355913 ssh_runner.go:195] Run: grep 192.168.39.165	control-plane.minikube.internal$ /etc/hosts
	I1002 11:14:38.325549  355913 command_runner.go:130] > 192.168.39.165	control-plane.minikube.internal
	I1002 11:14:38.325956  355913 host.go:66] Checking if "multinode-224116" exists ...
	I1002 11:14:38.326268  355913 config.go:182] Loaded profile config "multinode-224116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:14:38.326308  355913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:14:38.326389  355913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:14:38.341284  355913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33929
	I1002 11:14:38.341763  355913 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:14:38.342200  355913 main.go:141] libmachine: Using API Version  1
	I1002 11:14:38.342223  355913 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:14:38.342612  355913 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:14:38.342849  355913 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:14:38.342995  355913 start.go:304] JoinCluster: &{Name:multinode-224116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-224116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.135 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.195 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:14:38.343109  355913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1002 11:14:38.343129  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:14:38.345716  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:14:38.346138  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:14:38.346168  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:14:38.346299  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:14:38.346492  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:14:38.346649  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:14:38.346752  355913 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa Username:docker}
	I1002 11:14:38.520043  355913 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ixreo8.6ek1a8joqjjp4k7n --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 11:14:38.520092  355913 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.195 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 11:14:38.520127  355913 host.go:66] Checking if "multinode-224116" exists ...
	I1002 11:14:38.520450  355913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:14:38.520493  355913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:14:38.535128  355913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I1002 11:14:38.535608  355913 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:14:38.536091  355913 main.go:141] libmachine: Using API Version  1
	I1002 11:14:38.536114  355913 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:14:38.536429  355913 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:14:38.536581  355913 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:14:38.536768  355913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-224116-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1002 11:14:38.536791  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:14:38.539620  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:14:38.540055  355913 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:10:34 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:14:38.540091  355913 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:14:38.540209  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:14:38.540371  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:14:38.540529  355913 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:14:38.540650  355913 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa Username:docker}
	I1002 11:14:38.698389  355913 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1002 11:14:38.752324  355913 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-z2ps6, kube-system/kube-proxy-8tg2f
	I1002 11:14:41.776676  355913 command_runner.go:130] > node/multinode-224116-m03 cordoned
	I1002 11:14:41.776706  355913 command_runner.go:130] > pod "busybox-5bc68d56bd-nswcq" has DeletionTimestamp older than 1 seconds, skipping
	I1002 11:14:41.776712  355913 command_runner.go:130] > node/multinode-224116-m03 drained
	I1002 11:14:41.776913  355913 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-224116-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.240104917s)
	I1002 11:14:41.776953  355913 node.go:108] successfully drained node "m03"
	I1002 11:14:41.777400  355913 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:14:41.777662  355913 kapi.go:59] client config for multinode-224116: &rest.Config{Host:"https://192.168.39.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:14:41.778014  355913 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1002 11:14:41.778102  355913 round_trippers.go:463] DELETE https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m03
	I1002 11:14:41.778112  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:41.778124  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:41.778134  355913 round_trippers.go:473]     Content-Type: application/json
	I1002 11:14:41.778148  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:41.799428  355913 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1002 11:14:41.799459  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:41.799470  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:41.799478  355913 round_trippers.go:580]     Content-Length: 171
	I1002 11:14:41.799486  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:41 GMT
	I1002 11:14:41.799493  355913 round_trippers.go:580]     Audit-Id: 223905ac-c7fb-45c6-a45b-2a64a6464e56
	I1002 11:14:41.799508  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:41.799516  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:41.799524  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:41.799570  355913 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-224116-m03","kind":"nodes","uid":"60156cb0-4b83-40ca-ab0d-93bdf316a64a"}}
	I1002 11:14:41.799616  355913 node.go:124] successfully deleted node "m03"
	I1002 11:14:41.799629  355913 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.195 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 11:14:41.799656  355913 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.195 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 11:14:41.799681  355913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ixreo8.6ek1a8joqjjp4k7n --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-224116-m03"
	I1002 11:14:41.953649  355913 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 11:14:42.195006  355913 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 11:14:42.195123  355913 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 11:14:42.283273  355913 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:14:42.283306  355913 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:14:42.283316  355913 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1002 11:14:42.416184  355913 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1002 11:14:42.940953  355913 command_runner.go:130] > This node has joined the cluster:
	I1002 11:14:42.940983  355913 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1002 11:14:42.940990  355913 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1002 11:14:42.940996  355913 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1002 11:14:42.943825  355913 command_runner.go:130] ! W1002 11:14:41.947392    2358 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1002 11:14:42.943857  355913 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1002 11:14:42.943866  355913 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1002 11:14:42.943879  355913 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1002 11:14:42.943900  355913 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ixreo8.6ek1a8joqjjp4k7n --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-224116-m03": (1.144201238s)
	I1002 11:14:42.943924  355913 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1002 11:14:43.201917  355913 start.go:306] JoinCluster complete in 4.858918823s
	I1002 11:14:43.201951  355913 cni.go:84] Creating CNI manager for ""
	I1002 11:14:43.201959  355913 cni.go:136] 3 nodes found, recommending kindnet
	I1002 11:14:43.202046  355913 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 11:14:43.209449  355913 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1002 11:14:43.209478  355913 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1002 11:14:43.209488  355913 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1002 11:14:43.209499  355913 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 11:14:43.209508  355913 command_runner.go:130] > Access: 2023-10-02 11:10:34.846172782 +0000
	I1002 11:14:43.209517  355913 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I1002 11:14:43.209526  355913 command_runner.go:130] > Change: 2023-10-02 11:10:33.014172782 +0000
	I1002 11:14:43.209532  355913 command_runner.go:130] >  Birth: -
	I1002 11:14:43.210030  355913 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 11:14:43.210048  355913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 11:14:43.227985  355913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 11:14:43.565054  355913 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1002 11:14:43.565086  355913 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1002 11:14:43.565096  355913 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1002 11:14:43.565104  355913 command_runner.go:130] > daemonset.apps/kindnet configured
	I1002 11:14:43.565618  355913 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:14:43.565973  355913 kapi.go:59] client config for multinode-224116: &rest.Config{Host:"https://192.168.39.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:14:43.566433  355913 round_trippers.go:463] GET https://192.168.39.165:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 11:14:43.566447  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:43.566459  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:43.566470  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:43.574221  355913 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1002 11:14:43.574244  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:43.574254  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:43.574262  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:43.574270  355913 round_trippers.go:580]     Content-Length: 291
	I1002 11:14:43.574282  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:43 GMT
	I1002 11:14:43.574294  355913 round_trippers.go:580]     Audit-Id: e1389b48-1d72-4e82-8e12-1d6246a6242d
	I1002 11:14:43.574304  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:43.574314  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:43.574614  355913 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08c5bbea-ba20-4e90-9cf5-25582be54095","resourceVersion":"860","creationTimestamp":"2023-10-02T11:00:39Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1002 11:14:43.574735  355913 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-224116" context rescaled to 1 replicas
	I1002 11:14:43.574763  355913 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.195 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1002 11:14:43.576857  355913 out.go:177] * Verifying Kubernetes components...
	I1002 11:14:43.578511  355913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:14:43.592708  355913 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:14:43.593042  355913 kapi.go:59] client config for multinode-224116: &rest.Config{Host:"https://192.168.39.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/multinode-224116/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:14:43.593298  355913 node_ready.go:35] waiting up to 6m0s for node "multinode-224116-m03" to be "Ready" ...
	I1002 11:14:43.593365  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m03
	I1002 11:14:43.593372  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:43.593379  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:43.593388  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:43.595646  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:14:43.595669  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:43.595683  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:43 GMT
	I1002 11:14:43.595692  355913 round_trippers.go:580]     Audit-Id: d202a1ec-ab37-453f-9b2e-e46c5f02938d
	I1002 11:14:43.595700  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:43.595716  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:43.595724  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:43.595736  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:43.596211  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m03","uid":"5fde5323-98c1-4443-bbb7-b3d186927c3e","resourceVersion":"1198","creationTimestamp":"2023-10-02T11:14:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:14:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:14:42Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1002 11:14:43.596481  355913 node_ready.go:49] node "multinode-224116-m03" has status "Ready":"True"
	I1002 11:14:43.596495  355913 node_ready.go:38] duration metric: took 3.182716ms waiting for node "multinode-224116-m03" to be "Ready" ...
	I1002 11:14:43.596504  355913 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:14:43.596561  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I1002 11:14:43.596569  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:43.596576  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:43.596583  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:43.600103  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:14:43.600118  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:43.600128  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:43.600134  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:43.600142  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:43.600150  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:43.600162  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:43 GMT
	I1002 11:14:43.600170  355913 round_trippers.go:580]     Audit-Id: 62812b3d-4cfe-415c-aa6b-89fc4e2d8834
	I1002 11:14:43.601413  355913 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1202"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"841","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82087 chars]
	I1002 11:14:43.603890  355913 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace to be "Ready" ...
	I1002 11:14:43.603957  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h6gbq
	I1002 11:14:43.603967  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:43.603974  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:43.603980  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:43.606307  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:14:43.606321  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:43.606327  355913 round_trippers.go:580]     Audit-Id: 203b6a22-6e55-4d41-ad73-f39b903a8506
	I1002 11:14:43.606332  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:43.606337  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:43.606342  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:43.606349  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:43.606375  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:43 GMT
	I1002 11:14:43.606609  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h6gbq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49ee2f4a-1c73-4642-bd3b-678e6cb9ef55","resourceVersion":"841","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ab600503-cc13-4422-97d4-5e83ca32d368","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab600503-cc13-4422-97d4-5e83ca32d368\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1002 11:14:43.606979  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:14:43.606991  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:43.606998  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:43.607004  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:43.609115  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:14:43.609134  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:43.609142  355913 round_trippers.go:580]     Audit-Id: 3b7ccd92-a103-4e9b-bf6c-3998ec6ffc53
	I1002 11:14:43.609151  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:43.609160  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:43.609168  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:43.609177  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:43.609185  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:43 GMT
	I1002 11:14:43.609342  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"872","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1002 11:14:43.609609  355913 pod_ready.go:92] pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace has status "Ready":"True"
	I1002 11:14:43.609621  355913 pod_ready.go:81] duration metric: took 5.712027ms waiting for pod "coredns-5dd5756b68-h6gbq" in "kube-system" namespace to be "Ready" ...
	I1002 11:14:43.609628  355913 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:14:43.609668  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-224116
	I1002 11:14:43.609677  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:43.609683  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:43.609689  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:43.612179  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:14:43.612194  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:43.612200  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:43.612205  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:43 GMT
	I1002 11:14:43.612210  355913 round_trippers.go:580]     Audit-Id: 4eec014f-b778-46e4-a1b1-fcd85c6676d6
	I1002 11:14:43.612216  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:43.612224  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:43.612229  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:43.612614  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-224116","namespace":"kube-system","uid":"5accde9f-e62c-422f-aaa1-ddf4f8f0da05","resourceVersion":"835","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.165:2379","kubernetes.io/config.hash":"6042e2dc777b7ecb1e5f00a006739c52","kubernetes.io/config.mirror":"6042e2dc777b7ecb1e5f00a006739c52","kubernetes.io/config.seen":"2023-10-02T11:00:31.044390279Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1002 11:14:43.612984  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:14:43.612997  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:43.613005  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:43.613014  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:43.615397  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:14:43.615415  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:43.615423  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:43.615431  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:43.615439  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:43.615447  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:43.615453  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:43 GMT
	I1002 11:14:43.615459  355913 round_trippers.go:580]     Audit-Id: 272fbddb-022c-47a2-ae56-f71a7aed198f
	I1002 11:14:43.615703  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"872","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1002 11:14:43.615974  355913 pod_ready.go:92] pod "etcd-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:14:43.615986  355913 pod_ready.go:81] duration metric: took 6.353289ms waiting for pod "etcd-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:14:43.616001  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:14:43.616045  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-224116
	I1002 11:14:43.616052  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:43.616059  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:43.616065  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:43.619969  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:14:43.619987  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:43.619996  355913 round_trippers.go:580]     Audit-Id: 082741ed-372e-4d96-bfd1-e2c9065ffe05
	I1002 11:14:43.620004  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:43.620012  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:43.620025  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:43.620038  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:43.620050  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:43 GMT
	I1002 11:14:43.620305  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-224116","namespace":"kube-system","uid":"26841310-e8b5-409e-8915-888db5e257ab","resourceVersion":"862","creationTimestamp":"2023-10-02T11:00:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.165:8443","kubernetes.io/config.hash":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.mirror":"11cc08b65180f58db5ea8ca677f3032f","kubernetes.io/config.seen":"2023-10-02T11:00:31.044391274Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1002 11:14:43.620656  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:14:43.620670  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:43.620680  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:43.620688  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:43.622979  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:14:43.622997  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:43.623007  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:43.623015  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:43.623023  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:43 GMT
	I1002 11:14:43.623035  355913 round_trippers.go:580]     Audit-Id: 22769597-501a-4b82-a789-c4a0c0e431c6
	I1002 11:14:43.623046  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:43.623054  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:43.623212  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"872","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1002 11:14:43.623529  355913 pod_ready.go:92] pod "kube-apiserver-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:14:43.623544  355913 pod_ready.go:81] duration metric: took 7.530872ms waiting for pod "kube-apiserver-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:14:43.623555  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:14:43.623613  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-224116
	I1002 11:14:43.623622  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:43.623628  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:43.623634  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:43.626617  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:14:43.626632  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:43.626640  355913 round_trippers.go:580]     Audit-Id: 83287152-3749-4d17-a525-f13f97db432d
	I1002 11:14:43.626649  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:43.626658  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:43.626671  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:43.626683  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:43.626696  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:43 GMT
	I1002 11:14:43.628135  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-224116","namespace":"kube-system","uid":"7d71d06a-a323-41ce-a7a4-c7d33880f9fa","resourceVersion":"832","creationTimestamp":"2023-10-02T11:00:40Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3bee2fdb49df38e62a3033b15b9a59ad","kubernetes.io/config.mirror":"3bee2fdb49df38e62a3033b15b9a59ad","kubernetes.io/config.seen":"2023-10-02T11:00:39.980801936Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1002 11:14:43.628484  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:14:43.628497  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:43.628507  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:43.628516  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:43.631184  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:14:43.631198  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:43.631204  355913 round_trippers.go:580]     Audit-Id: c0eaf2c2-7b14-4f83-b4c1-a2b2f3a8a411
	I1002 11:14:43.631209  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:43.631221  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:43.631232  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:43.631242  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:43.631254  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:43 GMT
	I1002 11:14:43.631455  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"872","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1002 11:14:43.631751  355913 pod_ready.go:92] pod "kube-controller-manager-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:14:43.631765  355913 pod_ready.go:81] duration metric: took 8.198074ms waiting for pod "kube-controller-manager-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:14:43.631773  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8tg2f" in "kube-system" namespace to be "Ready" ...
	I1002 11:14:43.794171  355913 request.go:629] Waited for 162.31446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tg2f
	I1002 11:14:43.794243  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tg2f
	I1002 11:14:43.794251  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:43.794263  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:43.794280  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:43.797356  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:14:43.797375  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:43.797382  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:43.797388  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:43.797393  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:43.797398  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:43 GMT
	I1002 11:14:43.797409  355913 round_trippers.go:580]     Audit-Id: 80e8c7fe-9873-4ded-8dcb-095c714d7b2e
	I1002 11:14:43.797420  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:43.797559  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8tg2f","generateName":"kube-proxy-","namespace":"kube-system","uid":"dd300e3b-222c-43bb-9997-2d1bddbc8e94","resourceVersion":"1169","creationTimestamp":"2023-10-02T11:02:28Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3580e93b-6724-4c5c-baca-8e1b963cda9a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3580e93b-6724-4c5c-baca-8e1b963cda9a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I1002 11:14:43.994377  355913 request.go:629] Waited for 196.354652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m03
	I1002 11:14:43.994465  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m03
	I1002 11:14:43.994476  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:43.994489  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:43.994502  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:43.997603  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:14:43.997622  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:43.997630  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:43 GMT
	I1002 11:14:43.997639  355913 round_trippers.go:580]     Audit-Id: 71710443-2281-40f6-87e8-375647b59245
	I1002 11:14:43.997646  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:43.997652  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:43.997660  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:43.997669  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:43.998035  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m03","uid":"5fde5323-98c1-4443-bbb7-b3d186927c3e","resourceVersion":"1198","creationTimestamp":"2023-10-02T11:14:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:14:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:14:42Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1002 11:14:44.194322  355913 request.go:629] Waited for 195.929894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tg2f
	I1002 11:14:44.194390  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tg2f
	I1002 11:14:44.194395  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:44.194404  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:44.194412  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:44.197610  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:14:44.197630  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:44.197637  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:44.197642  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:44.197647  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:44 GMT
	I1002 11:14:44.197652  355913 round_trippers.go:580]     Audit-Id: 7ea0c213-a703-4055-bdbc-604cafa273d4
	I1002 11:14:44.197657  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:44.197662  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:44.198023  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8tg2f","generateName":"kube-proxy-","namespace":"kube-system","uid":"dd300e3b-222c-43bb-9997-2d1bddbc8e94","resourceVersion":"1169","creationTimestamp":"2023-10-02T11:02:28Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3580e93b-6724-4c5c-baca-8e1b963cda9a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3580e93b-6724-4c5c-baca-8e1b963cda9a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I1002 11:14:44.393731  355913 request.go:629] Waited for 195.109597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m03
	I1002 11:14:44.393813  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m03
	I1002 11:14:44.393818  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:44.393825  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:44.393831  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:44.397996  355913 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 11:14:44.398020  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:44.398034  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:44.398045  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:44.398058  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:44.398069  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:44 GMT
	I1002 11:14:44.398077  355913 round_trippers.go:580]     Audit-Id: e2c651cb-7aa2-487c-a19e-dc6992b5857a
	I1002 11:14:44.398085  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:44.398234  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m03","uid":"5fde5323-98c1-4443-bbb7-b3d186927c3e","resourceVersion":"1198","creationTimestamp":"2023-10-02T11:14:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:14:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:14:42Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1002 11:14:44.899320  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8tg2f
	I1002 11:14:44.899342  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:44.899351  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:44.899357  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:44.902292  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:14:44.902312  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:44.902321  355913 round_trippers.go:580]     Audit-Id: e323aaf9-c072-45ea-b6ac-dc95b2d4dc97
	I1002 11:14:44.902330  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:44.902340  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:44.902348  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:44.902372  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:44.902382  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:44 GMT
	I1002 11:14:44.902483  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8tg2f","generateName":"kube-proxy-","namespace":"kube-system","uid":"dd300e3b-222c-43bb-9997-2d1bddbc8e94","resourceVersion":"1210","creationTimestamp":"2023-10-02T11:02:28Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3580e93b-6724-4c5c-baca-8e1b963cda9a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:02:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3580e93b-6724-4c5c-baca-8e1b963cda9a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I1002 11:14:44.902911  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m03
	I1002 11:14:44.902924  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:44.902931  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:44.902937  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:44.905431  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:14:44.905446  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:44.905453  355913 round_trippers.go:580]     Audit-Id: 39b84d18-6b0b-4ec3-a838-27b4a7e390fa
	I1002 11:14:44.905458  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:44.905463  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:44.905468  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:44.905473  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:44.905478  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:44 GMT
	I1002 11:14:44.906141  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m03","uid":"5fde5323-98c1-4443-bbb7-b3d186927c3e","resourceVersion":"1198","creationTimestamp":"2023-10-02T11:14:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:14:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:14:42Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1002 11:14:44.906488  355913 pod_ready.go:92] pod "kube-proxy-8tg2f" in "kube-system" namespace has status "Ready":"True"
	I1002 11:14:44.906513  355913 pod_ready.go:81] duration metric: took 1.274730366s waiting for pod "kube-proxy-8tg2f" in "kube-system" namespace to be "Ready" ...
	I1002 11:14:44.906524  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nshcj" in "kube-system" namespace to be "Ready" ...
	I1002 11:14:44.993841  355913 request.go:629] Waited for 87.244413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nshcj
	I1002 11:14:44.993928  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nshcj
	I1002 11:14:44.993938  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:44.993950  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:44.993960  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:45.001224  355913 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1002 11:14:45.001248  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:45.001255  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:45.001261  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:45.001266  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:44 GMT
	I1002 11:14:45.001271  355913 round_trippers.go:580]     Audit-Id: e83f2178-2145-4243-ab7d-b8d96203e036
	I1002 11:14:45.001276  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:45.001282  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:45.001749  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nshcj","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3def928-5e43-4f7e-8ae2-3c0daafd0003","resourceVersion":"800","creationTimestamp":"2023-10-02T11:00:52Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3580e93b-6724-4c5c-baca-8e1b963cda9a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3580e93b-6724-4c5c-baca-8e1b963cda9a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1002 11:14:45.193478  355913 request.go:629] Waited for 191.28459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:14:45.193564  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:14:45.193573  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:45.193582  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:45.193595  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:45.196608  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:14:45.196628  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:45.196635  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:45.196640  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:45.196645  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:45 GMT
	I1002 11:14:45.196652  355913 round_trippers.go:580]     Audit-Id: 49f53b15-b6b7-4433-8e59-02d1ed2faa36
	I1002 11:14:45.196660  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:45.196668  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:45.197030  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"872","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1002 11:14:45.197460  355913 pod_ready.go:92] pod "kube-proxy-nshcj" in "kube-system" namespace has status "Ready":"True"
	I1002 11:14:45.197478  355913 pod_ready.go:81] duration metric: took 290.940126ms waiting for pod "kube-proxy-nshcj" in "kube-system" namespace to be "Ready" ...
	I1002 11:14:45.197490  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rdt77" in "kube-system" namespace to be "Ready" ...
	I1002 11:14:45.393880  355913 request.go:629] Waited for 196.283344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdt77
	I1002 11:14:45.393960  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rdt77
	I1002 11:14:45.393968  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:45.393979  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:45.393990  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:45.397124  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:14:45.397145  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:45.397155  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:45.397163  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:45.397171  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:45.397178  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:45 GMT
	I1002 11:14:45.397186  355913 round_trippers.go:580]     Audit-Id: 00160a95-14f1-4b29-bcd2-24bfc800c65f
	I1002 11:14:45.397193  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:45.397612  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rdt77","generateName":"kube-proxy-","namespace":"kube-system","uid":"96482fa7-e7e4-4375-b3b6-cc24f41d4bcf","resourceVersion":"1024","creationTimestamp":"2023-10-02T11:01:33Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3580e93b-6724-4c5c-baca-8e1b963cda9a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:01:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3580e93b-6724-4c5c-baca-8e1b963cda9a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I1002 11:14:45.594429  355913 request.go:629] Waited for 196.396951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:14:45.594498  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116-m02
	I1002 11:14:45.594502  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:45.594510  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:45.594516  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:45.597235  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:14:45.597255  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:45.597262  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:45.597268  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:45.597277  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:45 GMT
	I1002 11:14:45.597292  355913 round_trippers.go:580]     Audit-Id: b8b9590d-8d1c-432d-8da2-af7170831a6c
	I1002 11:14:45.597301  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:45.597312  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:45.597535  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116-m02","uid":"e1be40d7-fc74-480d-ac71-0bbc41a5beee","resourceVersion":"1003","creationTimestamp":"2023-10-02T11:12:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:12:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:12:59Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1002 11:14:45.597804  355913 pod_ready.go:92] pod "kube-proxy-rdt77" in "kube-system" namespace has status "Ready":"True"
	I1002 11:14:45.597816  355913 pod_ready.go:81] duration metric: took 400.313189ms waiting for pod "kube-proxy-rdt77" in "kube-system" namespace to be "Ready" ...
	I1002 11:14:45.597825  355913 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:14:45.794271  355913 request.go:629] Waited for 196.353556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-224116
	I1002 11:14:45.794348  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-224116
	I1002 11:14:45.794370  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:45.794383  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:45.794394  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:45.797400  355913 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 11:14:45.797420  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:45.797427  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:45.797432  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:45.797438  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:45.797445  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:45.797456  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:45 GMT
	I1002 11:14:45.797476  355913 round_trippers.go:580]     Audit-Id: b36f29e0-220b-4d78-a6ca-0050fa1aaa29
	I1002 11:14:45.797686  355913 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-224116","namespace":"kube-system","uid":"66f95d23-f489-423f-9008-a7cf03a9ee55","resourceVersion":"834","creationTimestamp":"2023-10-02T11:00:40Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9bd8dc11ca1ef87923294a95bf3b31e7","kubernetes.io/config.mirror":"9bd8dc11ca1ef87923294a95bf3b31e7","kubernetes.io/config.seen":"2023-10-02T11:00:39.980802889Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T11:00:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1002 11:14:45.994433  355913 request.go:629] Waited for 196.327432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:14:45.994509  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/multinode-224116
	I1002 11:14:45.994514  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:45.994522  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:45.994531  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:46.000852  355913 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1002 11:14:46.000878  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:46.000888  355913 round_trippers.go:580]     Audit-Id: 1b984c85-b1d9-4d8f-900c-294c62609348
	I1002 11:14:46.000896  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:46.000923  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:46.000933  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:46.000944  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:46.000953  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:45 GMT
	I1002 11:14:46.001242  355913 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"872","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T11:00:36Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1002 11:14:46.001584  355913 pod_ready.go:92] pod "kube-scheduler-multinode-224116" in "kube-system" namespace has status "Ready":"True"
	I1002 11:14:46.001600  355913 pod_ready.go:81] duration metric: took 403.766931ms waiting for pod "kube-scheduler-multinode-224116" in "kube-system" namespace to be "Ready" ...
	I1002 11:14:46.001614  355913 pod_ready.go:38] duration metric: took 2.405097237s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:14:46.001641  355913 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:14:46.001691  355913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:14:46.015579  355913 system_svc.go:56] duration metric: took 13.928564ms WaitForService to wait for kubelet.
	I1002 11:14:46.015611  355913 kubeadm.go:581] duration metric: took 2.440820389s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:14:46.015636  355913 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:14:46.194106  355913 request.go:629] Waited for 178.388211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes
	I1002 11:14:46.194200  355913 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes
	I1002 11:14:46.194214  355913 round_trippers.go:469] Request Headers:
	I1002 11:14:46.194227  355913 round_trippers.go:473]     Accept: application/json, */*
	I1002 11:14:46.194239  355913 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 11:14:46.197535  355913 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 11:14:46.197557  355913 round_trippers.go:577] Response Headers:
	I1002 11:14:46.197572  355913 round_trippers.go:580]     Audit-Id: 6fd2d3f8-b2ef-4447-a2fd-b041c40c37da
	I1002 11:14:46.197581  355913 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 11:14:46.197589  355913 round_trippers.go:580]     Content-Type: application/json
	I1002 11:14:46.197596  355913 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f5cc0144-60c1-4546-aa78-bd065d85a1f5
	I1002 11:14:46.197610  355913 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 07c38015-c9ed-4dc8-b8f2-8abbf76e502f
	I1002 11:14:46.197619  355913 round_trippers.go:580]     Date: Mon, 02 Oct 2023 11:14:46 GMT
	I1002 11:14:46.198184  355913 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1213"},"items":[{"metadata":{"name":"multinode-224116","uid":"799a2d82-3858-43c5-8e99-98ae00381443","resourceVersion":"872","creationTimestamp":"2023-10-02T11:00:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-224116","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45957ed538272972541ab48cdf2c4b323d7f5c18","minikube.k8s.io/name":"multinode-224116","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T11_00_41_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15135 chars]
	I1002 11:14:46.198767  355913 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:14:46.198787  355913 node_conditions.go:123] node cpu capacity is 2
	I1002 11:14:46.198797  355913 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:14:46.198802  355913 node_conditions.go:123] node cpu capacity is 2
	I1002 11:14:46.198805  355913 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:14:46.198809  355913 node_conditions.go:123] node cpu capacity is 2
	I1002 11:14:46.198814  355913 node_conditions.go:105] duration metric: took 183.171913ms to run NodePressure ...
	I1002 11:14:46.198830  355913 start.go:228] waiting for startup goroutines ...
	I1002 11:14:46.198849  355913 start.go:242] writing updated cluster config ...
	I1002 11:14:46.199128  355913 ssh_runner.go:195] Run: rm -f paused
	I1002 11:14:46.249619  355913 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 11:14:46.251898  355913 out.go:177] * Done! kubectl is now configured to use "multinode-224116" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-02 11:10:33 UTC, ends at Mon 2023-10-02 11:14:47 UTC. --
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.353540442Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696245287353525402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e8b546dc-bbd6-4689-b61f-3259beb42367 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.354261172Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3db3a3ec-ad3f-43f1-8983-297be8703347 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.354335252Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3db3a3ec-ad3f-43f1-8983-297be8703347 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.354542129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6df97f9e42395a820d5a70343f82901ffd8fafbfe43345db3a392afef0e94cc,PodSandboxId:e74af2bc59298cbc18177f4f0aa2bed9e543ae9002512e9bd9f5332103e0e3f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696245097427976813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5da043-58ea-4918-836d-19655c55b885,},Annotations:map[string]string{io.kubernetes.container.hash: f3c6eb7d,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d855b81cd522442162cb25e756b075abef90df197237c6f9ab8475e6937e9f4d,PodSandboxId:c3c70a9d34409ac16679228497a44c9d4322ac531e8b7afaced4e248c3d3fb44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1696245077471619300,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-h45vs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ed1e56c2-6848-4905-995d-46cecedcabe7,},Annotations:map[string]string{io.kubernetes.container.hash: d782c78,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb6914b812f422c5b657c45d46c6b9b064814b12c3c701c290006230093f3147,PodSandboxId:2ec67d2362e70d77b2fd9ddd34d7f57928d3a078acb051b791cdcbe7331db166,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696245074738906818,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h6gbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49ee2f4a-1c73-4642-bd3b-678e6cb9ef55,},Annotations:map[string]string{io.kubernetes.container.hash: 873decce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f4a5c79d0951f473689a5ed81a24b8ef5550cee125c694fa4667a2fd34e5de5,PodSandboxId:a73bf9ef063652020957f2ac9f089d8a05f182f793c3b1a68ecd85da264be6cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1696245069652600216,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f7m28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: dc1438f0-bd67-457d-9e7e-b8998a01b029,},Annotations:map[string]string{io.kubernetes.container.hash: 654e42f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:679f86e7e306cc7a7a7c3ea83ddb4860226f545ec7d53aa6cc7510b1864d1f9a,PodSandboxId:60fbe7053b5cec50cdc667994f328af884999864eb12990b2a4af65ba399592b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696245067120754470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nshcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3def928-5e43-4f7e-8ae2-3c0daaf
d0003,},Annotations:map[string]string{io.kubernetes.container.hash: 44c6b935,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05fc599ef7c763ee4beb1e0e158d5d53f8bbd600c0c425a2f12c7533ce7ed14,PodSandboxId:e74af2bc59298cbc18177f4f0aa2bed9e543ae9002512e9bd9f5332103e0e3f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696245067146581861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5da043-58ea-4918-836d-19655c55b
885,},Annotations:map[string]string{io.kubernetes.container.hash: f3c6eb7d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5dc0540f04fa5e424d0e738e94fb099194bf0b6b1b6d14b318e1dcd4b63e4f9,PodSandboxId:0123ede69cbe707c4ea6bf2e3622017aa4cbf58d63be19c0be7994060bda2bea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696245060674581533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd8dc11ca1ef87923294a95bf3b31e7,},Annota
tions:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6df9444262f553d16c8297c6474358692752f979a9714209c68110fe7e86005,PodSandboxId:59f1c5b070b58ebfe67c1f1b15fd62452a056e0e20f9baae33b03e82a6b5251d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696245060458071637,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bee2fdb49df38e62
a3033b15b9a59ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071747514d4fcd871f3ce7fecf714054f18c1dc0f5ffbfebe4358af69561899d,PodSandboxId:dc92a1a26fb6dc5fccd8dc189c3b8d094b31fbeeee68627d418bbab7222f5eed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696245060416735482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11cc08b65180f58db5ea8ca677f30
32f,},Annotations:map[string]string{io.kubernetes.container.hash: b08656d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29426caa0c71bbc7ad7f70201b98168f0d64c6fa28a2d599c11731f14f38af23,PodSandboxId:d36bd45f06e2a8e9c804f77412cb48682ff1e8b6834d8f48d30b5764307404cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696245060176081238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6042e2dc777b7ecb1e5f00a006739c52,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: fbfb7459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3db3a3ec-ad3f-43f1-8983-297be8703347 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.397806926Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6a05af09-0844-4092-88db-ea1a8145ec82 name=/runtime.v1.RuntimeService/Version
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.397936549Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6a05af09-0844-4092-88db-ea1a8145ec82 name=/runtime.v1.RuntimeService/Version
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.398988144Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=abd37c95-bcd6-48a4-8cc8-ba137565c27c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.399518084Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696245287399502441,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=abd37c95-bcd6-48a4-8cc8-ba137565c27c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.400325921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a3c3609a-c52a-4e53-8383-8d137110f6fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.400399121Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a3c3609a-c52a-4e53-8383-8d137110f6fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.400623873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6df97f9e42395a820d5a70343f82901ffd8fafbfe43345db3a392afef0e94cc,PodSandboxId:e74af2bc59298cbc18177f4f0aa2bed9e543ae9002512e9bd9f5332103e0e3f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696245097427976813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5da043-58ea-4918-836d-19655c55b885,},Annotations:map[string]string{io.kubernetes.container.hash: f3c6eb7d,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d855b81cd522442162cb25e756b075abef90df197237c6f9ab8475e6937e9f4d,PodSandboxId:c3c70a9d34409ac16679228497a44c9d4322ac531e8b7afaced4e248c3d3fb44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1696245077471619300,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-h45vs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ed1e56c2-6848-4905-995d-46cecedcabe7,},Annotations:map[string]string{io.kubernetes.container.hash: d782c78,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb6914b812f422c5b657c45d46c6b9b064814b12c3c701c290006230093f3147,PodSandboxId:2ec67d2362e70d77b2fd9ddd34d7f57928d3a078acb051b791cdcbe7331db166,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696245074738906818,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h6gbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49ee2f4a-1c73-4642-bd3b-678e6cb9ef55,},Annotations:map[string]string{io.kubernetes.container.hash: 873decce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f4a5c79d0951f473689a5ed81a24b8ef5550cee125c694fa4667a2fd34e5de5,PodSandboxId:a73bf9ef063652020957f2ac9f089d8a05f182f793c3b1a68ecd85da264be6cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1696245069652600216,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f7m28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: dc1438f0-bd67-457d-9e7e-b8998a01b029,},Annotations:map[string]string{io.kubernetes.container.hash: 654e42f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:679f86e7e306cc7a7a7c3ea83ddb4860226f545ec7d53aa6cc7510b1864d1f9a,PodSandboxId:60fbe7053b5cec50cdc667994f328af884999864eb12990b2a4af65ba399592b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696245067120754470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nshcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3def928-5e43-4f7e-8ae2-3c0daaf
d0003,},Annotations:map[string]string{io.kubernetes.container.hash: 44c6b935,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05fc599ef7c763ee4beb1e0e158d5d53f8bbd600c0c425a2f12c7533ce7ed14,PodSandboxId:e74af2bc59298cbc18177f4f0aa2bed9e543ae9002512e9bd9f5332103e0e3f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696245067146581861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5da043-58ea-4918-836d-19655c55b
885,},Annotations:map[string]string{io.kubernetes.container.hash: f3c6eb7d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5dc0540f04fa5e424d0e738e94fb099194bf0b6b1b6d14b318e1dcd4b63e4f9,PodSandboxId:0123ede69cbe707c4ea6bf2e3622017aa4cbf58d63be19c0be7994060bda2bea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696245060674581533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd8dc11ca1ef87923294a95bf3b31e7,},Annota
tions:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6df9444262f553d16c8297c6474358692752f979a9714209c68110fe7e86005,PodSandboxId:59f1c5b070b58ebfe67c1f1b15fd62452a056e0e20f9baae33b03e82a6b5251d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696245060458071637,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bee2fdb49df38e62
a3033b15b9a59ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071747514d4fcd871f3ce7fecf714054f18c1dc0f5ffbfebe4358af69561899d,PodSandboxId:dc92a1a26fb6dc5fccd8dc189c3b8d094b31fbeeee68627d418bbab7222f5eed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696245060416735482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11cc08b65180f58db5ea8ca677f30
32f,},Annotations:map[string]string{io.kubernetes.container.hash: b08656d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29426caa0c71bbc7ad7f70201b98168f0d64c6fa28a2d599c11731f14f38af23,PodSandboxId:d36bd45f06e2a8e9c804f77412cb48682ff1e8b6834d8f48d30b5764307404cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696245060176081238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6042e2dc777b7ecb1e5f00a006739c52,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: fbfb7459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a3c3609a-c52a-4e53-8383-8d137110f6fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.441470307Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=31d1ef60-9f06-4ab3-8e07-c5b0e9002c74 name=/runtime.v1.RuntimeService/Version
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.441552142Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=31d1ef60-9f06-4ab3-8e07-c5b0e9002c74 name=/runtime.v1.RuntimeService/Version
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.443103312Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f01bda60-8e8f-4ab2-85f6-7f334256a22d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.443633783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696245287443617482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f01bda60-8e8f-4ab2-85f6-7f334256a22d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.444287118Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4cb682f9-b711-4ce7-8bb1-207449f495d3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.444334075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4cb682f9-b711-4ce7-8bb1-207449f495d3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.444524958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6df97f9e42395a820d5a70343f82901ffd8fafbfe43345db3a392afef0e94cc,PodSandboxId:e74af2bc59298cbc18177f4f0aa2bed9e543ae9002512e9bd9f5332103e0e3f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696245097427976813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5da043-58ea-4918-836d-19655c55b885,},Annotations:map[string]string{io.kubernetes.container.hash: f3c6eb7d,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d855b81cd522442162cb25e756b075abef90df197237c6f9ab8475e6937e9f4d,PodSandboxId:c3c70a9d34409ac16679228497a44c9d4322ac531e8b7afaced4e248c3d3fb44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1696245077471619300,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-h45vs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ed1e56c2-6848-4905-995d-46cecedcabe7,},Annotations:map[string]string{io.kubernetes.container.hash: d782c78,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb6914b812f422c5b657c45d46c6b9b064814b12c3c701c290006230093f3147,PodSandboxId:2ec67d2362e70d77b2fd9ddd34d7f57928d3a078acb051b791cdcbe7331db166,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696245074738906818,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h6gbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49ee2f4a-1c73-4642-bd3b-678e6cb9ef55,},Annotations:map[string]string{io.kubernetes.container.hash: 873decce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f4a5c79d0951f473689a5ed81a24b8ef5550cee125c694fa4667a2fd34e5de5,PodSandboxId:a73bf9ef063652020957f2ac9f089d8a05f182f793c3b1a68ecd85da264be6cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1696245069652600216,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f7m28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: dc1438f0-bd67-457d-9e7e-b8998a01b029,},Annotations:map[string]string{io.kubernetes.container.hash: 654e42f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:679f86e7e306cc7a7a7c3ea83ddb4860226f545ec7d53aa6cc7510b1864d1f9a,PodSandboxId:60fbe7053b5cec50cdc667994f328af884999864eb12990b2a4af65ba399592b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696245067120754470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nshcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3def928-5e43-4f7e-8ae2-3c0daaf
d0003,},Annotations:map[string]string{io.kubernetes.container.hash: 44c6b935,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05fc599ef7c763ee4beb1e0e158d5d53f8bbd600c0c425a2f12c7533ce7ed14,PodSandboxId:e74af2bc59298cbc18177f4f0aa2bed9e543ae9002512e9bd9f5332103e0e3f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696245067146581861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5da043-58ea-4918-836d-19655c55b
885,},Annotations:map[string]string{io.kubernetes.container.hash: f3c6eb7d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5dc0540f04fa5e424d0e738e94fb099194bf0b6b1b6d14b318e1dcd4b63e4f9,PodSandboxId:0123ede69cbe707c4ea6bf2e3622017aa4cbf58d63be19c0be7994060bda2bea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696245060674581533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd8dc11ca1ef87923294a95bf3b31e7,},Annota
tions:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6df9444262f553d16c8297c6474358692752f979a9714209c68110fe7e86005,PodSandboxId:59f1c5b070b58ebfe67c1f1b15fd62452a056e0e20f9baae33b03e82a6b5251d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696245060458071637,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bee2fdb49df38e62
a3033b15b9a59ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071747514d4fcd871f3ce7fecf714054f18c1dc0f5ffbfebe4358af69561899d,PodSandboxId:dc92a1a26fb6dc5fccd8dc189c3b8d094b31fbeeee68627d418bbab7222f5eed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696245060416735482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11cc08b65180f58db5ea8ca677f30
32f,},Annotations:map[string]string{io.kubernetes.container.hash: b08656d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29426caa0c71bbc7ad7f70201b98168f0d64c6fa28a2d599c11731f14f38af23,PodSandboxId:d36bd45f06e2a8e9c804f77412cb48682ff1e8b6834d8f48d30b5764307404cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696245060176081238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6042e2dc777b7ecb1e5f00a006739c52,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: fbfb7459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4cb682f9-b711-4ce7-8bb1-207449f495d3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.487898655Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b4f1d4e9-dd8b-4f0f-a3a2-cb1bc0b3101d name=/runtime.v1.RuntimeService/Version
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.487974347Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b4f1d4e9-dd8b-4f0f-a3a2-cb1bc0b3101d name=/runtime.v1.RuntimeService/Version
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.490510872Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=15bacfa6-693d-4126-941c-a65dd59cc4dd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.490872404Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696245287490861808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=15bacfa6-693d-4126-941c-a65dd59cc4dd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.491489429Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4d3bea97-2504-4e16-aadc-bf0545850e59 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.491545484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4d3bea97-2504-4e16-aadc-bf0545850e59 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:14:47 multinode-224116 crio[709]: time="2023-10-02 11:14:47.493629596Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6df97f9e42395a820d5a70343f82901ffd8fafbfe43345db3a392afef0e94cc,PodSandboxId:e74af2bc59298cbc18177f4f0aa2bed9e543ae9002512e9bd9f5332103e0e3f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696245097427976813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5da043-58ea-4918-836d-19655c55b885,},Annotations:map[string]string{io.kubernetes.container.hash: f3c6eb7d,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d855b81cd522442162cb25e756b075abef90df197237c6f9ab8475e6937e9f4d,PodSandboxId:c3c70a9d34409ac16679228497a44c9d4322ac531e8b7afaced4e248c3d3fb44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1696245077471619300,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-h45vs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ed1e56c2-6848-4905-995d-46cecedcabe7,},Annotations:map[string]string{io.kubernetes.container.hash: d782c78,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb6914b812f422c5b657c45d46c6b9b064814b12c3c701c290006230093f3147,PodSandboxId:2ec67d2362e70d77b2fd9ddd34d7f57928d3a078acb051b791cdcbe7331db166,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696245074738906818,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h6gbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49ee2f4a-1c73-4642-bd3b-678e6cb9ef55,},Annotations:map[string]string{io.kubernetes.container.hash: 873decce,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f4a5c79d0951f473689a5ed81a24b8ef5550cee125c694fa4667a2fd34e5de5,PodSandboxId:a73bf9ef063652020957f2ac9f089d8a05f182f793c3b1a68ecd85da264be6cd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1696245069652600216,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-f7m28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: dc1438f0-bd67-457d-9e7e-b8998a01b029,},Annotations:map[string]string{io.kubernetes.container.hash: 654e42f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:679f86e7e306cc7a7a7c3ea83ddb4860226f545ec7d53aa6cc7510b1864d1f9a,PodSandboxId:60fbe7053b5cec50cdc667994f328af884999864eb12990b2a4af65ba399592b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696245067120754470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nshcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3def928-5e43-4f7e-8ae2-3c0daaf
d0003,},Annotations:map[string]string{io.kubernetes.container.hash: 44c6b935,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05fc599ef7c763ee4beb1e0e158d5d53f8bbd600c0c425a2f12c7533ce7ed14,PodSandboxId:e74af2bc59298cbc18177f4f0aa2bed9e543ae9002512e9bd9f5332103e0e3f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696245067146581861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5da043-58ea-4918-836d-19655c55b
885,},Annotations:map[string]string{io.kubernetes.container.hash: f3c6eb7d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5dc0540f04fa5e424d0e738e94fb099194bf0b6b1b6d14b318e1dcd4b63e4f9,PodSandboxId:0123ede69cbe707c4ea6bf2e3622017aa4cbf58d63be19c0be7994060bda2bea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696245060674581533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bd8dc11ca1ef87923294a95bf3b31e7,},Annota
tions:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6df9444262f553d16c8297c6474358692752f979a9714209c68110fe7e86005,PodSandboxId:59f1c5b070b58ebfe67c1f1b15fd62452a056e0e20f9baae33b03e82a6b5251d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696245060458071637,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bee2fdb49df38e62
a3033b15b9a59ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071747514d4fcd871f3ce7fecf714054f18c1dc0f5ffbfebe4358af69561899d,PodSandboxId:dc92a1a26fb6dc5fccd8dc189c3b8d094b31fbeeee68627d418bbab7222f5eed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696245060416735482,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11cc08b65180f58db5ea8ca677f30
32f,},Annotations:map[string]string{io.kubernetes.container.hash: b08656d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29426caa0c71bbc7ad7f70201b98168f0d64c6fa28a2d599c11731f14f38af23,PodSandboxId:d36bd45f06e2a8e9c804f77412cb48682ff1e8b6834d8f48d30b5764307404cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696245060176081238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-224116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6042e2dc777b7ecb1e5f00a006739c52,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: fbfb7459,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4d3bea97-2504-4e16-aadc-bf0545850e59 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c6df97f9e4239       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   e74af2bc59298       storage-provisioner
	d855b81cd5224       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   c3c70a9d34409       busybox-5bc68d56bd-h45vs
	cb6914b812f42       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   2ec67d2362e70       coredns-5dd5756b68-h6gbq
	5f4a5c79d0951       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   a73bf9ef06365       kindnet-f7m28
	a05fc599ef7c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   e74af2bc59298       storage-provisioner
	679f86e7e306c       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                      3 minutes ago       Running             kube-proxy                1                   60fbe7053b5ce       kube-proxy-nshcj
	d5dc0540f04fa       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                      3 minutes ago       Running             kube-scheduler            1                   0123ede69cbe7       kube-scheduler-multinode-224116
	c6df9444262f5       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                      3 minutes ago       Running             kube-controller-manager   1                   59f1c5b070b58       kube-controller-manager-multinode-224116
	071747514d4fc       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                      3 minutes ago       Running             kube-apiserver            1                   dc92a1a26fb6d       kube-apiserver-multinode-224116
	29426caa0c71b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   d36bd45f06e2a       etcd-multinode-224116
	
	* 
	* ==> coredns [cb6914b812f422c5b657c45d46c6b9b064814b12c3c701c290006230093f3147] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35986 - 47361 "HINFO IN 8406755216984999826.1324291592158815563. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023743272s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-224116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-224116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=multinode-224116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T11_00_41_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 11:00:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-224116
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 11:14:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 11:11:36 +0000   Mon, 02 Oct 2023 11:00:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 11:11:36 +0000   Mon, 02 Oct 2023 11:00:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 11:11:36 +0000   Mon, 02 Oct 2023 11:00:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 11:11:36 +0000   Mon, 02 Oct 2023 11:11:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    multinode-224116
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 464a25fde409486a8499c1b4a9875d71
	  System UUID:                464a25fd-e409-486a-8499-c1b4a9875d71
	  Boot ID:                    8e86e713-0b2d-4161-ac44-3b4ae458c244
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-h45vs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-h6gbq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-224116                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-f7m28                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-224116             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-224116    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-nshcj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-224116             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m39s                  kube-proxy       
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-224116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-224116 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-224116 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-224116 event: Registered Node multinode-224116 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-224116 status is now: NodeReady
	  Normal  Starting                 3m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m48s (x8 over 3m48s)  kubelet          Node multinode-224116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s (x8 over 3m48s)  kubelet          Node multinode-224116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s (x7 over 3m48s)  kubelet          Node multinode-224116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m29s                  node-controller  Node multinode-224116 event: Registered Node multinode-224116 in Controller
	
	
	Name:               multinode-224116-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-224116-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 11:12:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-224116-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 11:14:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 11:12:59 +0000   Mon, 02 Oct 2023 11:12:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 11:12:59 +0000   Mon, 02 Oct 2023 11:12:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 11:12:59 +0000   Mon, 02 Oct 2023 11:12:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 11:12:59 +0000   Mon, 02 Oct 2023 11:12:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.135
	  Hostname:    multinode-224116-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 9faff2bd4d0e490eb7c3fbd314099eb6
	  System UUID:                9faff2bd-4d0e-490e-b7c3-fbd314099eb6
	  Boot ID:                    62386d71-74e9-40ca-ab53-69ca8dd92473
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-klqbt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-crtcw               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-rdt77            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 106s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-224116-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-224116-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-224116-m02 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m50s                  kubelet     Node multinode-224116-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m15s (x2 over 3m15s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       109s                   kubelet     Node multinode-224116-m02 status is now: NodeNotSchedulable
	  Normal   NodeReady                109s (x2 over 13m)     kubelet     Node multinode-224116-m02 status is now: NodeReady
	  Normal   Starting                 108s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  108s (x2 over 108s)    kubelet     Node multinode-224116-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    108s (x2 over 108s)    kubelet     Node multinode-224116-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     108s (x2 over 108s)    kubelet     Node multinode-224116-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  108s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                108s                   kubelet     Node multinode-224116-m02 status is now: NodeReady
	
	
	Name:               multinode-224116-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-224116-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 11:14:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-224116-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 11:14:42 +0000   Mon, 02 Oct 2023 11:14:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 11:14:42 +0000   Mon, 02 Oct 2023 11:14:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 11:14:42 +0000   Mon, 02 Oct 2023 11:14:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 11:14:42 +0000   Mon, 02 Oct 2023 11:14:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    multinode-224116-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a9824884c34a479ead532158bfc09fe1
	  System UUID:                a9824884-c34a-479e-ad53-2158bfc09fe1
	  Boot ID:                    02262d16-dbce-4ecd-ad47-50e0d9808668
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-nswcq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kindnet-z2ps6               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-8tg2f            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 3s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-224116-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-224116-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-224116-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-224116-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeNotReady             68s                kubelet     Node multinode-224116-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        38s (x2 over 98s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       7s                 kubelet     Node multinode-224116-m03 status is now: NodeNotSchedulable
	  Normal   NodeReady                7s (x2 over 11m)   kubelet     Node multinode-224116-m03 status is now: NodeReady
	  Normal   NodeHasNoDiskPressure    6s (x4 over 11m)   kubelet     Node multinode-224116-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6s (x4 over 11m)   kubelet     Node multinode-224116-m03 status is now: NodeHasSufficientPID
	  Normal   NodeSchedulable          6s                 kubelet     Node multinode-224116-m03 status is now: NodeSchedulable
	  Normal   NodeHasSufficientMemory  6s (x4 over 11m)   kubelet     Node multinode-224116-m03 status is now: NodeHasSufficientMemory
	  Normal   Starting                 5s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)    kubelet     Node multinode-224116-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)    kubelet     Node multinode-224116-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)    kubelet     Node multinode-224116-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                 kubelet     Node multinode-224116-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Oct 2 11:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073239] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.340504] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.261081] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.144598] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.541941] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.427807] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.099523] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.143935] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.106893] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.194216] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +16.870970] systemd-fstab-generator[908]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [29426caa0c71bbc7ad7f70201b98168f0d64c6fa28a2d599c11731f14f38af23] <==
	* {"level":"info","ts":"2023-10-02T11:11:02.389014Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-02T11:11:02.389106Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-10-02T11:11:02.389295Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T11:11:02.389359Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T11:11:02.389385Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T11:11:02.389619Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2023-10-02T11:11:02.389656Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2023-10-02T11:11:02.390264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 switched to configuration voters=(18429775660708452854)"}
	{"level":"info","ts":"2023-10-02T11:11:02.390345Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"58f0a6b9f17e1f60","local-member-id":"ffc3b7517aaad9f6","added-peer-id":"ffc3b7517aaad9f6","added-peer-peer-urls":["https://192.168.39.165:2380"]}
	{"level":"info","ts":"2023-10-02T11:11:02.390447Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"58f0a6b9f17e1f60","local-member-id":"ffc3b7517aaad9f6","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:11:02.390499Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:11:04.065314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-02T11:11:04.065541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-02T11:11:04.065589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 received MsgPreVoteResp from ffc3b7517aaad9f6 at term 2"}
	{"level":"info","ts":"2023-10-02T11:11:04.06562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became candidate at term 3"}
	{"level":"info","ts":"2023-10-02T11:11:04.065644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 received MsgVoteResp from ffc3b7517aaad9f6 at term 3"}
	{"level":"info","ts":"2023-10-02T11:11:04.065672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became leader at term 3"}
	{"level":"info","ts":"2023-10-02T11:11:04.065698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ffc3b7517aaad9f6 elected leader ffc3b7517aaad9f6 at term 3"}
	{"level":"info","ts":"2023-10-02T11:11:04.067992Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ffc3b7517aaad9f6","local-member-attributes":"{Name:multinode-224116 ClientURLs:[https://192.168.39.165:2379]}","request-path":"/0/members/ffc3b7517aaad9f6/attributes","cluster-id":"58f0a6b9f17e1f60","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T11:11:04.068066Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T11:11:04.069317Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.165:2379"}
	{"level":"info","ts":"2023-10-02T11:11:04.068084Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T11:11:04.069417Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T11:11:04.069935Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-02T11:11:04.07061Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  11:14:47 up 4 min,  0 users,  load average: 0.42, 0.37, 0.17
	Linux multinode-224116 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [5f4a5c79d0951f473689a5ed81a24b8ef5550cee125c694fa4667a2fd34e5de5] <==
	* I1002 11:14:01.303706       1 main.go:250] Node multinode-224116-m03 has CIDR [10.244.3.0/24] 
	I1002 11:14:11.314772       1 main.go:223] Handling node with IPs: map[192.168.39.165:{}]
	I1002 11:14:11.314829       1 main.go:227] handling current node
	I1002 11:14:11.314851       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I1002 11:14:11.314858       1 main.go:250] Node multinode-224116-m02 has CIDR [10.244.1.0/24] 
	I1002 11:14:11.315129       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I1002 11:14:11.315296       1 main.go:250] Node multinode-224116-m03 has CIDR [10.244.3.0/24] 
	I1002 11:14:21.331477       1 main.go:223] Handling node with IPs: map[192.168.39.165:{}]
	I1002 11:14:21.331661       1 main.go:227] handling current node
	I1002 11:14:21.331675       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I1002 11:14:21.331686       1 main.go:250] Node multinode-224116-m02 has CIDR [10.244.1.0/24] 
	I1002 11:14:21.332063       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I1002 11:14:21.332151       1 main.go:250] Node multinode-224116-m03 has CIDR [10.244.3.0/24] 
	I1002 11:14:31.345419       1 main.go:223] Handling node with IPs: map[192.168.39.165:{}]
	I1002 11:14:31.345469       1 main.go:227] handling current node
	I1002 11:14:31.345480       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I1002 11:14:31.345487       1 main.go:250] Node multinode-224116-m02 has CIDR [10.244.1.0/24] 
	I1002 11:14:31.345591       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I1002 11:14:31.345624       1 main.go:250] Node multinode-224116-m03 has CIDR [10.244.3.0/24] 
	I1002 11:14:41.361172       1 main.go:223] Handling node with IPs: map[192.168.39.165:{}]
	I1002 11:14:41.361427       1 main.go:227] handling current node
	I1002 11:14:41.361462       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I1002 11:14:41.361541       1 main.go:250] Node multinode-224116-m02 has CIDR [10.244.1.0/24] 
	I1002 11:14:41.361902       1 main.go:223] Handling node with IPs: map[192.168.39.195:{}]
	I1002 11:14:41.362006       1 main.go:250] Node multinode-224116-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kube-apiserver [071747514d4fcd871f3ce7fecf714054f18c1dc0f5ffbfebe4358af69561899d] <==
	* I1002 11:11:05.500929       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1002 11:11:05.501558       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 11:11:05.501670       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 11:11:05.565274       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1002 11:11:05.565313       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1002 11:11:05.634640       1 shared_informer.go:318] Caches are synced for configmaps
	I1002 11:11:05.634874       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1002 11:11:05.634954       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1002 11:11:05.634977       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1002 11:11:05.666116       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1002 11:11:05.666748       1 aggregator.go:166] initial CRD sync complete...
	I1002 11:11:05.666757       1 autoregister_controller.go:141] Starting autoregister controller
	I1002 11:11:05.666762       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 11:11:05.666767       1 cache.go:39] Caches are synced for autoregister controller
	I1002 11:11:05.667989       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 11:11:05.687975       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 11:11:05.693708       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 11:11:05.713118       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1002 11:11:06.507059       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 11:11:08.360504       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1002 11:11:08.496378       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 11:11:08.511881       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 11:11:08.579619       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 11:11:08.585733       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 11:11:54.961697       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [c6df9444262f553d16c8297c6474358692752f979a9714209c68110fe7e86005] <==
	* I1002 11:12:59.464581       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-224116-m02" podCIDRs=["10.244.1.0/24"]
	I1002 11:12:59.579426       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-224116-m03"
	I1002 11:13:00.237736       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.679117ms"
	I1002 11:13:00.238168       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="120.798µs"
	I1002 11:13:00.326008       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.868µs"
	I1002 11:13:13.613070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="90.604µs"
	I1002 11:13:14.197624       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="110.421µs"
	I1002 11:13:14.202788       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="60.417µs"
	I1002 11:13:39.785641       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-224116-m02"
	I1002 11:14:38.771783       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-klqbt"
	I1002 11:14:38.780923       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="21.713719ms"
	I1002 11:14:38.791571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.560493ms"
	I1002 11:14:38.813413       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="21.768203ms"
	I1002 11:14:38.813512       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="47.736µs"
	I1002 11:14:40.466976       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="7.883361ms"
	I1002 11:14:40.467120       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="41.349µs"
	I1002 11:14:40.920881       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-224116-m02"
	I1002 11:14:41.751877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="45.798µs"
	I1002 11:14:41.794260       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-224116-m02"
	I1002 11:14:42.645940       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-224116-m02"
	I1002 11:14:42.658461       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-224116-m03\" does not exist"
	I1002 11:14:42.659346       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-nswcq" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-nswcq"
	I1002 11:14:42.673284       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-224116-m03" podCIDRs=["10.244.2.0/24"]
	I1002 11:14:42.723110       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-224116-m02"
	I1002 11:14:43.539920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="69.312µs"
	
	* 
	* ==> kube-proxy [679f86e7e306cc7a7a7c3ea83ddb4860226f545ec7d53aa6cc7510b1864d1f9a] <==
	* I1002 11:11:07.491806       1 server_others.go:69] "Using iptables proxy"
	I1002 11:11:07.579255       1 node.go:141] Successfully retrieved node IP: 192.168.39.165
	I1002 11:11:07.782904       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1002 11:11:07.782953       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 11:11:07.785624       1 server_others.go:152] "Using iptables Proxier"
	I1002 11:11:07.785660       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 11:11:07.785889       1 server.go:846] "Version info" version="v1.28.2"
	I1002 11:11:07.785898       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 11:11:07.786904       1 config.go:188] "Starting service config controller"
	I1002 11:11:07.786922       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 11:11:07.786940       1 config.go:97] "Starting endpoint slice config controller"
	I1002 11:11:07.786943       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 11:11:07.788461       1 config.go:315] "Starting node config controller"
	I1002 11:11:07.788470       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 11:11:07.889299       1 shared_informer.go:318] Caches are synced for node config
	I1002 11:11:07.911298       1 shared_informer.go:318] Caches are synced for service config
	I1002 11:11:07.911330       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [d5dc0540f04fa5e424d0e738e94fb099194bf0b6b1b6d14b318e1dcd4b63e4f9] <==
	* I1002 11:11:03.345819       1 serving.go:348] Generated self-signed cert in-memory
	W1002 11:11:05.600621       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 11:11:05.600838       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 11:11:05.601045       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 11:11:05.601074       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 11:11:05.663437       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1002 11:11:05.663486       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 11:11:05.666519       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 11:11:05.667625       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 11:11:05.669588       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 11:11:05.669730       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 11:11:05.768782       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 11:10:33 UTC, ends at Mon 2023-10-02 11:14:48 UTC. --
	Oct 02 11:11:07 multinode-224116 kubelet[914]: E1002 11:11:07.814856     914 projected.go:198] Error preparing data for projected volume kube-api-access-gc5nh for pod default/busybox-5bc68d56bd-h45vs: object "default"/"kube-root-ca.crt" not registered
	Oct 02 11:11:07 multinode-224116 kubelet[914]: E1002 11:11:07.814908     914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed1e56c2-6848-4905-995d-46cecedcabe7-kube-api-access-gc5nh podName:ed1e56c2-6848-4905-995d-46cecedcabe7 nodeName:}" failed. No retries permitted until 2023-10-02 11:11:09.814894525 +0000 UTC m=+10.865095791 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gc5nh" (UniqueName: "kubernetes.io/projected/ed1e56c2-6848-4905-995d-46cecedcabe7-kube-api-access-gc5nh") pod "busybox-5bc68d56bd-h45vs" (UID: "ed1e56c2-6848-4905-995d-46cecedcabe7") : object "default"/"kube-root-ca.crt" not registered
	Oct 02 11:11:08 multinode-224116 kubelet[914]: E1002 11:11:08.200524     914 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-h45vs" podUID="ed1e56c2-6848-4905-995d-46cecedcabe7"
	Oct 02 11:11:08 multinode-224116 kubelet[914]: E1002 11:11:08.201032     914 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-h6gbq" podUID="49ee2f4a-1c73-4642-bd3b-678e6cb9ef55"
	Oct 02 11:11:09 multinode-224116 kubelet[914]: E1002 11:11:09.730059     914 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 02 11:11:09 multinode-224116 kubelet[914]: E1002 11:11:09.730281     914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/49ee2f4a-1c73-4642-bd3b-678e6cb9ef55-config-volume podName:49ee2f4a-1c73-4642-bd3b-678e6cb9ef55 nodeName:}" failed. No retries permitted until 2023-10-02 11:11:13.730263914 +0000 UTC m=+14.780465171 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/49ee2f4a-1c73-4642-bd3b-678e6cb9ef55-config-volume") pod "coredns-5dd5756b68-h6gbq" (UID: "49ee2f4a-1c73-4642-bd3b-678e6cb9ef55") : object "kube-system"/"coredns" not registered
	Oct 02 11:11:09 multinode-224116 kubelet[914]: E1002 11:11:09.830976     914 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Oct 02 11:11:09 multinode-224116 kubelet[914]: E1002 11:11:09.831034     914 projected.go:198] Error preparing data for projected volume kube-api-access-gc5nh for pod default/busybox-5bc68d56bd-h45vs: object "default"/"kube-root-ca.crt" not registered
	Oct 02 11:11:09 multinode-224116 kubelet[914]: E1002 11:11:09.831090     914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed1e56c2-6848-4905-995d-46cecedcabe7-kube-api-access-gc5nh podName:ed1e56c2-6848-4905-995d-46cecedcabe7 nodeName:}" failed. No retries permitted until 2023-10-02 11:11:13.831075354 +0000 UTC m=+14.881276622 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gc5nh" (UniqueName: "kubernetes.io/projected/ed1e56c2-6848-4905-995d-46cecedcabe7-kube-api-access-gc5nh") pod "busybox-5bc68d56bd-h45vs" (UID: "ed1e56c2-6848-4905-995d-46cecedcabe7") : object "default"/"kube-root-ca.crt" not registered
	Oct 02 11:11:10 multinode-224116 kubelet[914]: E1002 11:11:10.200550     914 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-h45vs" podUID="ed1e56c2-6848-4905-995d-46cecedcabe7"
	Oct 02 11:11:10 multinode-224116 kubelet[914]: E1002 11:11:10.200735     914 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-h6gbq" podUID="49ee2f4a-1c73-4642-bd3b-678e6cb9ef55"
	Oct 02 11:11:11 multinode-224116 kubelet[914]: I1002 11:11:11.210955     914 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 02 11:11:37 multinode-224116 kubelet[914]: I1002 11:11:37.391998     914 scope.go:117] "RemoveContainer" containerID="a05fc599ef7c763ee4beb1e0e158d5d53f8bbd600c0c425a2f12c7533ce7ed14"
	Oct 02 11:11:59 multinode-224116 kubelet[914]: E1002 11:11:59.221707     914 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 11:11:59 multinode-224116 kubelet[914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 11:11:59 multinode-224116 kubelet[914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 11:11:59 multinode-224116 kubelet[914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 11:12:59 multinode-224116 kubelet[914]: E1002 11:12:59.219326     914 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 11:12:59 multinode-224116 kubelet[914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 11:12:59 multinode-224116 kubelet[914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 11:12:59 multinode-224116 kubelet[914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 11:13:59 multinode-224116 kubelet[914]: E1002 11:13:59.225864     914 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 11:13:59 multinode-224116 kubelet[914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 11:13:59 multinode-224116 kubelet[914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 11:13:59 multinode-224116 kubelet[914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-224116 -n multinode-224116
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-224116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (686.11s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (143.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 stop
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-224116 stop: exit status 82 (2m1.717938241s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-224116"  ...
	* Stopping node "multinode-224116"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-224116 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 status
E1002 11:16:55.305416  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-224116 status: exit status 3 (18.724769957s)

                                                
                                                
-- stdout --
	multinode-224116
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-224116-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 11:17:11.190671  358165 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.165:22: connect: no route to host
	E1002 11:17:11.190719  358165 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.165:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-224116 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-224116 -n multinode-224116
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-224116 -n multinode-224116: exit status 3 (3.155048886s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 11:17:14.518806  358281 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.165:22: connect: no route to host
	E1002 11:17:14.518837  358281 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.165:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-224116" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.60s)

                                                
                                    
TestPreload (280.2s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-331046 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1002 11:26:55.306239  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 11:27:17.707779  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-331046 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m16.287200983s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-331046 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-331046 image pull gcr.io/k8s-minikube/busybox: (2.756556593s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-331046
E1002 11:29:04.538988  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 11:29:14.659896  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-331046: exit status 82 (2m1.368623359s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-331046"  ...
	* Stopping node "test-preload-331046"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-331046 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2023-10-02 11:29:52.572110271 +0000 UTC m=+3243.143796313
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-331046 -n test-preload-331046
E1002 11:29:58.358672  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-331046 -n test-preload-331046: exit status 3 (18.654135478s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 11:30:11.222848  361728 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.247:22: connect: no route to host
	E1002 11:30:11.222872  361728 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.247:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-331046" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-331046" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-331046
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-331046: (1.133774083s)
--- FAIL: TestPreload (280.20s)

                                                
                                    
TestRunningBinaryUpgrade (144.42s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.518286104.exe start -p running-upgrade-703246 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.518286104.exe start -p running-upgrade-703246 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m15.845687447s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-703246 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-703246 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (4.289316153s)

                                                
                                                
-- stdout --
	* [running-upgrade-703246] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-703246 in cluster running-upgrade-703246
	* Updating the running kvm2 "running-upgrade-703246" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 11:37:32.058601  368657 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:37:32.058887  368657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:37:32.058898  368657 out.go:309] Setting ErrFile to fd 2...
	I1002 11:37:32.058905  368657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:37:32.059115  368657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 11:37:32.059752  368657 out.go:303] Setting JSON to false
	I1002 11:37:32.060817  368657 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8398,"bootTime":1696238254,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 11:37:32.060908  368657 start.go:138] virtualization: kvm guest
	I1002 11:37:32.063394  368657 out.go:177] * [running-upgrade-703246] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 11:37:32.065037  368657 notify.go:220] Checking for updates...
	I1002 11:37:32.065057  368657 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 11:37:32.066481  368657 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:37:32.068124  368657 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:37:32.069791  368657 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:37:32.071393  368657 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 11:37:32.072895  368657 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 11:37:32.074621  368657 config.go:182] Loaded profile config "running-upgrade-703246": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1002 11:37:32.074638  368657 start_flags.go:686] config upgrade: Driver=kvm2
	I1002 11:37:32.074648  368657 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I1002 11:37:32.074728  368657 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/running-upgrade-703246/config.json ...
	I1002 11:37:32.075348  368657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:37:32.075421  368657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:37:32.091147  368657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40071
	I1002 11:37:32.091564  368657 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:37:32.092081  368657 main.go:141] libmachine: Using API Version  1
	I1002 11:37:32.092108  368657 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:37:32.092522  368657 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:37:32.092735  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .DriverName
	I1002 11:37:32.094910  368657 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1002 11:37:32.096269  368657 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 11:37:32.096694  368657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:37:32.096739  368657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:37:32.112391  368657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I1002 11:37:32.112958  368657 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:37:32.113570  368657 main.go:141] libmachine: Using API Version  1
	I1002 11:37:32.113608  368657 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:37:32.114043  368657 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:37:32.114254  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .DriverName
	I1002 11:37:32.152408  368657 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 11:37:32.153986  368657 start.go:298] selected driver: kvm2
	I1002 11:37:32.154010  368657 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-703246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.39.67 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 11:37:32.154102  368657 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 11:37:32.154977  368657 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:37:32.155060  368657 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 11:37:32.171217  368657 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 11:37:32.171564  368657 cni.go:84] Creating CNI manager for ""
	I1002 11:37:32.171585  368657 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1002 11:37:32.171594  368657 start_flags.go:321] config:
	{Name:running-upgrade-703246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.39.67 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 11:37:32.171760  368657 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:37:32.173776  368657 out.go:177] * Starting control plane node running-upgrade-703246 in cluster running-upgrade-703246
	I1002 11:37:32.175189  368657 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1002 11:37:32.630112  368657 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1002 11:37:32.630313  368657 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/running-upgrade-703246/config.json ...
	I1002 11:37:32.630478  368657 cache.go:107] acquiring lock: {Name:mk0615fc7d3af16cee9624322e71fde1879911f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:37:32.630501  368657 cache.go:107] acquiring lock: {Name:mk4f88a55d2d37c72a1fa0fa93be041c537dce50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:37:32.630518  368657 cache.go:107] acquiring lock: {Name:mk2a2053e82afc00e52a6528d654a785d26d8602 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:37:32.630553  368657 cache.go:107] acquiring lock: {Name:mk2359b0063ac4487adb3031a590c80d77dfb229 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:37:32.630597  368657 cache.go:107] acquiring lock: {Name:mk39f37a603628f5ccb0ca1c565b805f3ac3002d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:37:32.630621  368657 cache.go:107] acquiring lock: {Name:mka86c2df65fb7eb437c67ab2049dc1a7abb0ac2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:37:32.630659  368657 cache.go:115] /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 11:37:32.630666  368657 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1002 11:37:32.630674  368657 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 212.29µs
	I1002 11:37:32.630686  368657 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 11:37:32.630688  368657 start.go:365] acquiring machines lock for running-upgrade-703246: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:37:32.630700  368657 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1002 11:37:32.630666  368657 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1002 11:37:32.630731  368657 start.go:369] acquired machines lock for "running-upgrade-703246" in 27.423µs
	I1002 11:37:32.630741  368657 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1002 11:37:32.630746  368657 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:37:32.630757  368657 fix.go:54] fixHost starting: minikube
	I1002 11:37:32.630740  368657 cache.go:107] acquiring lock: {Name:mk0f2c9b90fc334ca3760ea399c4a1eaa9cca21a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:37:32.630824  368657 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1002 11:37:32.630569  368657 cache.go:107] acquiring lock: {Name:mk681e8ae3b01150602a25bbc008d9da6a3f90eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:37:32.630726  368657 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1002 11:37:32.630917  368657 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1002 11:37:32.631202  368657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:37:32.631272  368657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:37:32.632080  368657 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1002 11:37:32.632097  368657 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1002 11:37:32.632135  368657 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1002 11:37:32.632086  368657 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1002 11:37:32.632209  368657 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1002 11:37:32.632086  368657 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1002 11:37:32.632297  368657 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1002 11:37:32.649255  368657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35425
	I1002 11:37:32.649703  368657 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:37:32.650183  368657 main.go:141] libmachine: Using API Version  1
	I1002 11:37:32.650206  368657 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:37:32.650576  368657 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:37:32.650809  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .DriverName
	I1002 11:37:32.650981  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetState
	I1002 11:37:32.652696  368657 fix.go:102] recreateIfNeeded on running-upgrade-703246: state=Running err=<nil>
	W1002 11:37:32.652735  368657 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:37:32.655273  368657 out.go:177] * Updating the running kvm2 "running-upgrade-703246" VM ...
	I1002 11:37:32.656686  368657 machine.go:88] provisioning docker machine ...
	I1002 11:37:32.656722  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .DriverName
	I1002 11:37:32.657018  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetMachineName
	I1002 11:37:32.657237  368657 buildroot.go:166] provisioning hostname "running-upgrade-703246"
	I1002 11:37:32.657259  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetMachineName
	I1002 11:37:32.657443  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHHostname
	I1002 11:37:32.660635  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:32.661074  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:d5:d1", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:35:52 +0000 UTC Type:0 Mac:52:54:00:10:d5:d1 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:running-upgrade-703246 Clientid:01:52:54:00:10:d5:d1}
	I1002 11:37:32.661103  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined IP address 192.168.39.67 and MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:32.661228  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHPort
	I1002 11:37:32.661428  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHKeyPath
	I1002 11:37:32.661598  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHKeyPath
	I1002 11:37:32.661785  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHUsername
	I1002 11:37:32.661964  368657 main.go:141] libmachine: Using SSH client type: native
	I1002 11:37:32.662488  368657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1002 11:37:32.662514  368657 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-703246 && echo "running-upgrade-703246" | sudo tee /etc/hostname
	I1002 11:37:32.765406  368657 cache.go:162] opening:  /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1002 11:37:32.783135  368657 cache.go:162] opening:  /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1002 11:37:32.787845  368657 cache.go:162] opening:  /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1002 11:37:32.788784  368657 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-703246
	
	I1002 11:37:32.788810  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHHostname
	I1002 11:37:32.791952  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:32.792356  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:d5:d1", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:35:52 +0000 UTC Type:0 Mac:52:54:00:10:d5:d1 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:running-upgrade-703246 Clientid:01:52:54:00:10:d5:d1}
	I1002 11:37:32.792399  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined IP address 192.168.39.67 and MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:32.792597  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHPort
	I1002 11:37:32.792813  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHKeyPath
	I1002 11:37:32.792993  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHKeyPath
	I1002 11:37:32.793147  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHUsername
	I1002 11:37:32.793315  368657 main.go:141] libmachine: Using SSH client type: native
	I1002 11:37:32.793692  368657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1002 11:37:32.793712  368657 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-703246' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-703246/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-703246' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:37:32.818686  368657 cache.go:162] opening:  /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1002 11:37:32.834691  368657 cache.go:157] /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1002 11:37:32.834721  368657 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 204.218591ms
	I1002 11:37:32.834735  368657 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1002 11:37:32.848327  368657 cache.go:162] opening:  /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1002 11:37:32.858318  368657 cache.go:162] opening:  /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1002 11:37:32.903158  368657 cache.go:162] opening:  /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1002 11:37:32.908046  368657 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:37:32.908084  368657 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:37:32.908132  368657 buildroot.go:174] setting up certificates
	I1002 11:37:32.908147  368657 provision.go:83] configureAuth start
	I1002 11:37:32.908163  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetMachineName
	I1002 11:37:32.908493  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetIP
	I1002 11:37:32.911922  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:32.912351  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:d5:d1", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:35:52 +0000 UTC Type:0 Mac:52:54:00:10:d5:d1 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:running-upgrade-703246 Clientid:01:52:54:00:10:d5:d1}
	I1002 11:37:32.912386  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined IP address 192.168.39.67 and MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:32.912713  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHHostname
	I1002 11:37:32.915585  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:32.915987  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:d5:d1", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:35:52 +0000 UTC Type:0 Mac:52:54:00:10:d5:d1 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:running-upgrade-703246 Clientid:01:52:54:00:10:d5:d1}
	I1002 11:37:32.916019  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined IP address 192.168.39.67 and MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:32.916273  368657 provision.go:138] copyHostCerts
	I1002 11:37:32.916334  368657 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:37:32.916347  368657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:37:32.916419  368657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:37:32.916569  368657 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:37:32.916582  368657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:37:32.916619  368657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:37:32.916713  368657 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:37:32.916723  368657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:37:32.916753  368657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:37:32.916842  368657 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-703246 san=[192.168.39.67 192.168.39.67 localhost 127.0.0.1 minikube running-upgrade-703246]
	I1002 11:37:33.069393  368657 provision.go:172] copyRemoteCerts
	I1002 11:37:33.069500  368657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:37:33.069553  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHHostname
	I1002 11:37:33.072893  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:33.073426  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:d5:d1", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:35:52 +0000 UTC Type:0 Mac:52:54:00:10:d5:d1 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:running-upgrade-703246 Clientid:01:52:54:00:10:d5:d1}
	I1002 11:37:33.073466  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined IP address 192.168.39.67 and MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:33.073716  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHPort
	I1002 11:37:33.073947  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHKeyPath
	I1002 11:37:33.074161  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHUsername
	I1002 11:37:33.074349  368657 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/running-upgrade-703246/id_rsa Username:docker}
	I1002 11:37:33.159516  368657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:37:33.178334  368657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 11:37:33.197297  368657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 11:37:33.221205  368657 provision.go:86] duration metric: configureAuth took 313.023798ms
	I1002 11:37:33.221247  368657 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:37:33.221469  368657 config.go:182] Loaded profile config "running-upgrade-703246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1002 11:37:33.221574  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHHostname
	I1002 11:37:33.225573  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:33.225968  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:d5:d1", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:35:52 +0000 UTC Type:0 Mac:52:54:00:10:d5:d1 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:running-upgrade-703246 Clientid:01:52:54:00:10:d5:d1}
	I1002 11:37:33.226013  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined IP address 192.168.39.67 and MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:33.226137  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHPort
	I1002 11:37:33.226547  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHKeyPath
	I1002 11:37:33.226795  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHKeyPath
	I1002 11:37:33.226976  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHUsername
	I1002 11:37:33.227173  368657 main.go:141] libmachine: Using SSH client type: native
	I1002 11:37:33.227590  368657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1002 11:37:33.227613  368657 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:37:33.494407  368657 cache.go:157] /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1002 11:37:33.494447  368657 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 863.922542ms
	I1002 11:37:33.494467  368657 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1002 11:37:33.848977  368657 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:37:33.849005  368657 machine.go:91] provisioned docker machine in 1.192296166s
	I1002 11:37:33.849018  368657 start.go:300] post-start starting for "running-upgrade-703246" (driver="kvm2")
	I1002 11:37:33.849035  368657 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:37:33.849059  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .DriverName
	I1002 11:37:33.850515  368657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:37:33.850552  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHHostname
	I1002 11:37:33.854491  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:33.854523  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:d5:d1", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:35:52 +0000 UTC Type:0 Mac:52:54:00:10:d5:d1 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:running-upgrade-703246 Clientid:01:52:54:00:10:d5:d1}
	I1002 11:37:33.854544  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined IP address 192.168.39.67 and MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:33.854547  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHPort
	I1002 11:37:33.854808  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHKeyPath
	I1002 11:37:33.855137  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHUsername
	I1002 11:37:33.855283  368657 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/running-upgrade-703246/id_rsa Username:docker}
	I1002 11:37:33.947411  368657 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:37:33.952983  368657 info.go:137] Remote host: Buildroot 2019.02.7
	I1002 11:37:33.953059  368657 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:37:33.953159  368657 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:37:33.953274  368657 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:37:33.953392  368657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:37:33.965271  368657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:37:33.989083  368657 start.go:303] post-start completed in 140.041574ms
	I1002 11:37:33.989109  368657 fix.go:56] fixHost completed within 1.358351026s
	I1002 11:37:33.989135  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHHostname
	I1002 11:37:33.992757  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:33.993615  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:d5:d1", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:35:52 +0000 UTC Type:0 Mac:52:54:00:10:d5:d1 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:running-upgrade-703246 Clientid:01:52:54:00:10:d5:d1}
	I1002 11:37:33.993623  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHPort
	I1002 11:37:33.993645  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined IP address 192.168.39.67 and MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:33.994016  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHKeyPath
	I1002 11:37:33.994229  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHKeyPath
	I1002 11:37:33.994395  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHUsername
	I1002 11:37:33.994605  368657 main.go:141] libmachine: Using SSH client type: native
	I1002 11:37:33.995069  368657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I1002 11:37:33.995090  368657 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 11:37:33.998893  368657 cache.go:157] /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1002 11:37:33.998921  368657 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.368181889s
	I1002 11:37:33.998940  368657 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1002 11:37:34.131305  368657 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696246654.126070118
	
	I1002 11:37:34.131329  368657 fix.go:206] guest clock: 1696246654.126070118
	I1002 11:37:34.131338  368657 fix.go:219] Guest: 2023-10-02 11:37:34.126070118 +0000 UTC Remote: 2023-10-02 11:37:33.989113196 +0000 UTC m=+1.967125077 (delta=136.956922ms)
	I1002 11:37:34.131362  368657 fix.go:190] guest clock delta is within tolerance: 136.956922ms
	I1002 11:37:34.131368  368657 start.go:83] releasing machines lock for "running-upgrade-703246", held for 1.500629892s
	I1002 11:37:34.131391  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .DriverName
	I1002 11:37:34.135627  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetIP
	I1002 11:37:34.138694  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:34.139094  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:d5:d1", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:35:52 +0000 UTC Type:0 Mac:52:54:00:10:d5:d1 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:running-upgrade-703246 Clientid:01:52:54:00:10:d5:d1}
	I1002 11:37:34.139120  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined IP address 192.168.39.67 and MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:34.139400  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .DriverName
	I1002 11:37:34.142951  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .DriverName
	I1002 11:37:34.143207  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .DriverName
	I1002 11:37:34.143299  368657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:37:34.143350  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHHostname
	I1002 11:37:34.143438  368657 ssh_runner.go:195] Run: cat /version.json
	I1002 11:37:34.143453  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHHostname
	I1002 11:37:34.147449  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:34.147942  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:d5:d1", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:35:52 +0000 UTC Type:0 Mac:52:54:00:10:d5:d1 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:running-upgrade-703246 Clientid:01:52:54:00:10:d5:d1}
	I1002 11:37:34.147973  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined IP address 192.168.39.67 and MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:34.148171  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:34.148728  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHPort
	I1002 11:37:34.148735  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:d5:d1", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:35:52 +0000 UTC Type:0 Mac:52:54:00:10:d5:d1 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:running-upgrade-703246 Clientid:01:52:54:00:10:d5:d1}
	I1002 11:37:34.148762  368657 main.go:141] libmachine: (running-upgrade-703246) DBG | domain running-upgrade-703246 has defined IP address 192.168.39.67 and MAC address 52:54:00:10:d5:d1 in network minikube-net
	I1002 11:37:34.148952  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHKeyPath
	I1002 11:37:34.149034  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHPort
	I1002 11:37:34.149295  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHKeyPath
	I1002 11:37:34.149305  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHUsername
	I1002 11:37:34.149498  368657 main.go:141] libmachine: (running-upgrade-703246) Calling .GetSSHUsername
	I1002 11:37:34.149567  368657 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/running-upgrade-703246/id_rsa Username:docker}
	I1002 11:37:34.149667  368657 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/running-upgrade-703246/id_rsa Username:docker}
	I1002 11:37:34.227292  368657 cache.go:157] /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1002 11:37:34.227316  368657 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.596749532s
	I1002 11:37:34.227332  368657 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1002 11:37:34.230758  368657 cache.go:157] /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1002 11:37:34.230788  368657 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.600192307s
	I1002 11:37:34.230832  368657 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	W1002 11:37:34.240924  368657 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1002 11:37:34.631296  368657 cache.go:157] /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1002 11:37:34.631318  368657 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 2.000840301s
	I1002 11:37:34.631331  368657 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1002 11:37:34.697551  368657 cache.go:157] /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1002 11:37:34.697593  368657 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 2.067007626s
	I1002 11:37:34.697621  368657 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1002 11:37:34.697640  368657 cache.go:87] Successfully saved all images to host disk.
	I1002 11:37:34.697698  368657 ssh_runner.go:195] Run: systemctl --version
	I1002 11:37:34.703111  368657 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:37:34.788788  368657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:37:34.795717  368657 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:37:34.795795  368657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:37:34.801573  368657 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 11:37:34.801600  368657 start.go:469] detecting cgroup driver to use...
	I1002 11:37:34.801661  368657 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:37:34.813809  368657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:37:34.823387  368657 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:37:34.823451  368657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:37:34.832000  368657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:37:34.841252  368657 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1002 11:37:34.849337  368657 docker.go:207] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1002 11:37:34.849400  368657 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:37:34.966100  368657 docker.go:213] disabling docker service ...
	I1002 11:37:34.966176  368657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:37:35.988055  368657 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.021848756s)
	I1002 11:37:35.988134  368657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:37:36.000689  368657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:37:36.102702  368657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:37:36.254628  368657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:37:36.272649  368657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:37:36.286528  368657 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1002 11:37:36.286582  368657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:37:36.295311  368657 out.go:177] 
	W1002 11:37:36.296503  368657 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1002 11:37:36.296528  368657 out.go:239] * 
	W1002 11:37:36.297766  368657 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 11:37:36.300350  368657 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-703246 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-02 11:37:36.326135498 +0000 UTC m=+3706.897821549
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-703246 -n running-upgrade-703246
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-703246 -n running-upgrade-703246: exit status 4 (255.126454ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 11:37:36.549241  368759 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-703246" does not appear in /home/jenkins/minikube-integration/17340-332611/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-703246" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-703246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-703246
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-703246: (2.146963457s)
--- FAIL: TestRunningBinaryUpgrade (144.42s)
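The RUNTIME_ENABLE failure above comes from `sed -i` being pointed at /etc/crio/crio.conf.d/02-crio.conf, a drop-in file the v1.6-era guest image does not ship. A minimal sketch of that step made tolerant of the missing drop-in, run against a scratch directory rather than a live guest (the scratch root and the seeded config stub are assumptions for safe local testing, not part of minikube):

```shell
# Reproduce the pause_image update against a scratch copy of the crio config tree.
root=$(mktemp -d)
mkdir -p "$root/etc/crio/crio.conf.d"
conf="$root/etc/crio/crio.conf.d/02-crio.conf"

# Seed a minimal drop-in when it is absent (the failing case in the log above),
# so sed always has a file to edit instead of exiting with status 1.
[ -f "$conf" ] || printf '[crio.image]\npause_image = ""\n' > "$conf"

# The same substitution the test ran, now against an existing file.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$conf"

# prints: pause_image = "registry.k8s.io/pause:3.1"
grep 'pause_image' "$conf"
```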

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (44.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-083017 --driver=kvm2  --container-runtime=crio
E1002 11:36:55.305334  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-083017 --driver=kvm2  --container-runtime=crio: signal: killed (44.662984529s)

                                                
                                                
-- stdout --
	* [NoKubernetes-083017] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-083017
	* Restarting existing kvm2 VM for "NoKubernetes-083017" ...

                                                
                                                
-- /stdout --
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-linux-amd64 start -p NoKubernetes-083017 --driver=kvm2  --container-runtime=crio" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-083017 -n NoKubernetes-083017
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-083017 -n NoKubernetes-083017: exit status 6 (229.751631ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 11:37:10.287218  368355 status.go:415] kubeconfig endpoint: extract IP: "NoKubernetes-083017" does not appear in /home/jenkins/minikube-integration/17340-332611/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-083017" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (44.89s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (286.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.3331314701.exe start -p stopped-upgrade-204505 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.3331314701.exe start -p stopped-upgrade-204505 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m13.438667437s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.3331314701.exe -p stopped-upgrade-204505 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.3331314701.exe -p stopped-upgrade-204505 stop: (1m32.773959238s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-204505 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-204505 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m0.302385834s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-204505] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-204505 in cluster stopped-upgrade-204505
	* Restarting existing kvm2 VM for "stopped-upgrade-204505" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 11:40:59.436872  371046 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:40:59.437074  371046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:40:59.437085  371046 out.go:309] Setting ErrFile to fd 2...
	I1002 11:40:59.437092  371046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:40:59.437412  371046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 11:40:59.438189  371046 out.go:303] Setting JSON to false
	I1002 11:40:59.439701  371046 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8606,"bootTime":1696238254,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 11:40:59.439787  371046 start.go:138] virtualization: kvm guest
	I1002 11:40:59.442446  371046 out.go:177] * [stopped-upgrade-204505] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 11:40:59.444127  371046 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 11:40:59.444186  371046 notify.go:220] Checking for updates...
	I1002 11:40:59.445893  371046 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:40:59.447477  371046 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:40:59.449034  371046 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:40:59.450211  371046 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 11:40:59.451738  371046 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 11:40:59.453725  371046 config.go:182] Loaded profile config "stopped-upgrade-204505": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1002 11:40:59.453778  371046 start_flags.go:686] config upgrade: Driver=kvm2
	I1002 11:40:59.453802  371046 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I1002 11:40:59.453982  371046 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/stopped-upgrade-204505/config.json ...
	I1002 11:40:59.454826  371046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:40:59.454909  371046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:40:59.484178  371046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I1002 11:40:59.486613  371046 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:40:59.491113  371046 main.go:141] libmachine: Using API Version  1
	I1002 11:40:59.491140  371046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:40:59.491650  371046 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:40:59.491848  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .DriverName
	I1002 11:40:59.494433  371046 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1002 11:40:59.496048  371046 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 11:40:59.496472  371046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:40:59.496519  371046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:40:59.516931  371046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41787
	I1002 11:40:59.517440  371046 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:40:59.517910  371046 main.go:141] libmachine: Using API Version  1
	I1002 11:40:59.517926  371046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:40:59.518278  371046 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:40:59.518442  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .DriverName
	I1002 11:40:59.566552  371046 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 11:40:59.568111  371046 start.go:298] selected driver: kvm2
	I1002 11:40:59.568127  371046 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-204505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.39.11 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 11:40:59.568275  371046 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 11:40:59.569214  371046 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:40:59.569309  371046 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 11:40:59.590293  371046 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 11:40:59.590765  371046 cni.go:84] Creating CNI manager for ""
	I1002 11:40:59.590792  371046 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1002 11:40:59.590801  371046 start_flags.go:321] config:
	{Name:stopped-upgrade-204505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.39.11 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 11:40:59.591026  371046 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:40:59.594007  371046 out.go:177] * Starting control plane node stopped-upgrade-204505 in cluster stopped-upgrade-204505
	I1002 11:40:59.595926  371046 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1002 11:41:00.043632  371046 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1002 11:41:00.043822  371046 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/stopped-upgrade-204505/config.json ...
	I1002 11:41:00.044159  371046 start.go:365] acquiring machines lock for stopped-upgrade-204505: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:41:00.044386  371046 cache.go:107] acquiring lock: {Name:mk0615fc7d3af16cee9624322e71fde1879911f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:41:00.044410  371046 cache.go:107] acquiring lock: {Name:mka86c2df65fb7eb437c67ab2049dc1a7abb0ac2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:41:00.044410  371046 cache.go:107] acquiring lock: {Name:mk39f37a603628f5ccb0ca1c565b805f3ac3002d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:41:00.044450  371046 cache.go:115] /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 11:41:00.044460  371046 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 78.887µs
	I1002 11:41:00.044471  371046 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 11:41:00.044476  371046 cache.go:107] acquiring lock: {Name:mk681e8ae3b01150602a25bbc008d9da6a3f90eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:41:00.044484  371046 cache.go:107] acquiring lock: {Name:mk2a2053e82afc00e52a6528d654a785d26d8602 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:41:00.044510  371046 cache.go:107] acquiring lock: {Name:mk4f88a55d2d37c72a1fa0fa93be041c537dce50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:41:00.044530  371046 cache.go:115] /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1002 11:41:00.044542  371046 cache.go:115] /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1002 11:41:00.044541  371046 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 56.729µs
	I1002 11:41:00.044548  371046 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 40.215µs
	I1002 11:41:00.044558  371046 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1002 11:41:00.044552  371046 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1002 11:41:00.044574  371046 cache.go:115] /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1002 11:41:00.044571  371046 cache.go:107] acquiring lock: {Name:mk0f2c9b90fc334ca3760ea399c4a1eaa9cca21a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:41:00.044573  371046 cache.go:107] acquiring lock: {Name:mk2359b0063ac4487adb3031a590c80d77dfb229 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:41:00.044588  371046 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 111.846µs
	I1002 11:41:00.044600  371046 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1002 11:41:00.044608  371046 cache.go:115] /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1002 11:41:00.044612  371046 cache.go:115] /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1002 11:41:00.044615  371046 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 47.34µs
	I1002 11:41:00.044619  371046 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 47.253µs
	I1002 11:41:00.044623  371046 cache.go:115] /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1002 11:41:00.044630  371046 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1002 11:41:00.044634  371046 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 243.2µs
	I1002 11:41:00.044638  371046 cache.go:115] /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1002 11:41:00.044643  371046 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1002 11:41:00.044627  371046 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1002 11:41:00.044647  371046 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 250.477µs
	I1002 11:41:00.044657  371046 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1002 11:41:00.044665  371046 cache.go:87] Successfully saved all images to host disk.
	I1002 11:41:15.787798  371046 start.go:369] acquired machines lock for "stopped-upgrade-204505" in 15.743597587s
	I1002 11:41:15.787852  371046 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:41:15.787868  371046 fix.go:54] fixHost starting: minikube
	I1002 11:41:15.788255  371046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:41:15.788309  371046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:41:15.808217  371046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43897
	I1002 11:41:15.808637  371046 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:41:15.809204  371046 main.go:141] libmachine: Using API Version  1
	I1002 11:41:15.809237  371046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:41:15.809618  371046 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:41:15.809832  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .DriverName
	I1002 11:41:15.810022  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetState
	I1002 11:41:15.811820  371046 fix.go:102] recreateIfNeeded on stopped-upgrade-204505: state=Stopped err=<nil>
	I1002 11:41:15.811844  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .DriverName
	W1002 11:41:15.812033  371046 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:41:15.813997  371046 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-204505" ...
	I1002 11:41:15.815593  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .Start
	I1002 11:41:15.815796  371046 main.go:141] libmachine: (stopped-upgrade-204505) Ensuring networks are active...
	I1002 11:41:15.817115  371046 main.go:141] libmachine: (stopped-upgrade-204505) Ensuring network default is active
	I1002 11:41:15.817133  371046 main.go:141] libmachine: (stopped-upgrade-204505) Ensuring network minikube-net is active
	I1002 11:41:15.817502  371046 main.go:141] libmachine: (stopped-upgrade-204505) Getting domain xml...
	I1002 11:41:15.818164  371046 main.go:141] libmachine: (stopped-upgrade-204505) Creating domain...
	I1002 11:41:17.383047  371046 main.go:141] libmachine: (stopped-upgrade-204505) Waiting to get IP...
	I1002 11:41:17.386452  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:17.386485  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | unable to find current IP address of domain stopped-upgrade-204505 in network minikube-net
	I1002 11:41:17.386504  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | I1002 11:41:17.384319  371578 retry.go:31] will retry after 264.119806ms: waiting for machine to come up
	I1002 11:41:17.651653  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:17.652081  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | unable to find current IP address of domain stopped-upgrade-204505 in network minikube-net
	I1002 11:41:17.652115  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | I1002 11:41:17.652059  371578 retry.go:31] will retry after 291.183512ms: waiting for machine to come up
	I1002 11:41:17.944553  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:17.945208  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | unable to find current IP address of domain stopped-upgrade-204505 in network minikube-net
	I1002 11:41:17.945241  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | I1002 11:41:17.945125  371578 retry.go:31] will retry after 388.221606ms: waiting for machine to come up
	I1002 11:41:18.334818  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:18.335486  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | unable to find current IP address of domain stopped-upgrade-204505 in network minikube-net
	I1002 11:41:18.335512  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | I1002 11:41:18.335393  371578 retry.go:31] will retry after 489.635091ms: waiting for machine to come up
	I1002 11:41:18.833004  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:18.834068  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | unable to find current IP address of domain stopped-upgrade-204505 in network minikube-net
	I1002 11:41:18.834100  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | I1002 11:41:18.833979  371578 retry.go:31] will retry after 561.940663ms: waiting for machine to come up
	I1002 11:41:19.399495  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:19.400212  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | unable to find current IP address of domain stopped-upgrade-204505 in network minikube-net
	I1002 11:41:19.400247  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | I1002 11:41:19.400123  371578 retry.go:31] will retry after 933.293396ms: waiting for machine to come up
	I1002 11:41:20.335068  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:20.335732  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | unable to find current IP address of domain stopped-upgrade-204505 in network minikube-net
	I1002 11:41:20.335760  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | I1002 11:41:20.335638  371578 retry.go:31] will retry after 759.152714ms: waiting for machine to come up
	I1002 11:41:21.096983  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:21.097506  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | unable to find current IP address of domain stopped-upgrade-204505 in network minikube-net
	I1002 11:41:21.097538  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | I1002 11:41:21.097488  371578 retry.go:31] will retry after 1.318721541s: waiting for machine to come up
	I1002 11:41:22.418051  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:22.418670  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | unable to find current IP address of domain stopped-upgrade-204505 in network minikube-net
	I1002 11:41:22.418701  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | I1002 11:41:22.418619  371578 retry.go:31] will retry after 1.622318631s: waiting for machine to come up
	I1002 11:41:24.043335  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:24.043946  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | unable to find current IP address of domain stopped-upgrade-204505 in network minikube-net
	I1002 11:41:24.043980  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | I1002 11:41:24.043890  371578 retry.go:31] will retry after 1.538677052s: waiting for machine to come up
	I1002 11:41:25.584417  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:25.584956  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | unable to find current IP address of domain stopped-upgrade-204505 in network minikube-net
	I1002 11:41:25.584990  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | I1002 11:41:25.584900  371578 retry.go:31] will retry after 2.893618552s: waiting for machine to come up
	I1002 11:41:28.480754  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:28.481370  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | unable to find current IP address of domain stopped-upgrade-204505 in network minikube-net
	I1002 11:41:28.481394  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | I1002 11:41:28.481331  371578 retry.go:31] will retry after 3.317833171s: waiting for machine to come up
	I1002 11:41:31.800405  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:31.800933  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | unable to find current IP address of domain stopped-upgrade-204505 in network minikube-net
	I1002 11:41:31.800964  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | I1002 11:41:31.800877  371578 retry.go:31] will retry after 4.163880624s: waiting for machine to come up
	I1002 11:41:35.966663  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:35.967064  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | unable to find current IP address of domain stopped-upgrade-204505 in network minikube-net
	I1002 11:41:35.967093  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | I1002 11:41:35.967025  371578 retry.go:31] will retry after 3.915433774s: waiting for machine to come up
	I1002 11:41:39.884909  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:39.885474  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | unable to find current IP address of domain stopped-upgrade-204505 in network minikube-net
	I1002 11:41:39.885515  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | I1002 11:41:39.885403  371578 retry.go:31] will retry after 5.864402297s: waiting for machine to come up
	I1002 11:41:45.755464  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:45.755893  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | unable to find current IP address of domain stopped-upgrade-204505 in network minikube-net
	I1002 11:41:45.755927  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | I1002 11:41:45.755853  371578 retry.go:31] will retry after 6.886974856s: waiting for machine to come up
	I1002 11:41:52.645008  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:52.645581  371046 main.go:141] libmachine: (stopped-upgrade-204505) Found IP for machine: 192.168.39.11
	I1002 11:41:52.645614  371046 main.go:141] libmachine: (stopped-upgrade-204505) Reserving static IP address...
	I1002 11:41:52.645634  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has current primary IP address 192.168.39.11 and MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:52.646068  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | found host DHCP lease matching {name: "stopped-upgrade-204505", mac: "52:54:00:1b:26:88", ip: "192.168.39.11"} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:41:44 +0000 UTC Type:0 Mac:52:54:00:1b:26:88 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:stopped-upgrade-204505 Clientid:01:52:54:00:1b:26:88}
	I1002 11:41:52.646109  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-204505", mac: "52:54:00:1b:26:88", ip: "192.168.39.11"}
	I1002 11:41:52.646127  371046 main.go:141] libmachine: (stopped-upgrade-204505) Reserved static IP address: 192.168.39.11
	I1002 11:41:52.646144  371046 main.go:141] libmachine: (stopped-upgrade-204505) Waiting for SSH to be available...
	I1002 11:41:52.646163  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | Getting to WaitForSSH function...
	I1002 11:41:52.648199  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:52.648561  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:26:88", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:41:44 +0000 UTC Type:0 Mac:52:54:00:1b:26:88 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:stopped-upgrade-204505 Clientid:01:52:54:00:1b:26:88}
	I1002 11:41:52.648598  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined IP address 192.168.39.11 and MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:52.648690  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | Using SSH client type: external
	I1002 11:41:52.648723  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/stopped-upgrade-204505/id_rsa (-rw-------)
	I1002 11:41:52.648763  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/stopped-upgrade-204505/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:41:52.648783  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | About to run SSH command:
	I1002 11:41:52.648806  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | exit 0
	I1002 11:41:52.778272  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | SSH cmd err, output: <nil>: 
	I1002 11:41:52.778663  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetConfigRaw
	I1002 11:41:52.779419  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetIP
	I1002 11:41:52.782412  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:52.782886  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:26:88", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:41:44 +0000 UTC Type:0 Mac:52:54:00:1b:26:88 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:stopped-upgrade-204505 Clientid:01:52:54:00:1b:26:88}
	I1002 11:41:52.782922  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined IP address 192.168.39.11 and MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:52.783180  371046 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/stopped-upgrade-204505/config.json ...
	I1002 11:41:52.783432  371046 machine.go:88] provisioning docker machine ...
	I1002 11:41:52.783469  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .DriverName
	I1002 11:41:52.783675  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetMachineName
	I1002 11:41:52.783858  371046 buildroot.go:166] provisioning hostname "stopped-upgrade-204505"
	I1002 11:41:52.783880  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetMachineName
	I1002 11:41:52.784006  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHHostname
	I1002 11:41:52.786391  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:52.786722  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:26:88", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:41:44 +0000 UTC Type:0 Mac:52:54:00:1b:26:88 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:stopped-upgrade-204505 Clientid:01:52:54:00:1b:26:88}
	I1002 11:41:52.786750  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined IP address 192.168.39.11 and MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:52.786873  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHPort
	I1002 11:41:52.787086  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHKeyPath
	I1002 11:41:52.787270  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHKeyPath
	I1002 11:41:52.787405  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHUsername
	I1002 11:41:52.787563  371046 main.go:141] libmachine: Using SSH client type: native
	I1002 11:41:52.787912  371046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1002 11:41:52.787928  371046 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-204505 && echo "stopped-upgrade-204505" | sudo tee /etc/hostname
	I1002 11:41:52.905760  371046 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-204505
	
	I1002 11:41:52.905793  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHHostname
	I1002 11:41:52.908350  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:52.908733  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:26:88", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:41:44 +0000 UTC Type:0 Mac:52:54:00:1b:26:88 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:stopped-upgrade-204505 Clientid:01:52:54:00:1b:26:88}
	I1002 11:41:52.908765  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined IP address 192.168.39.11 and MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:52.908840  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHPort
	I1002 11:41:52.909057  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHKeyPath
	I1002 11:41:52.909252  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHKeyPath
	I1002 11:41:52.909418  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHUsername
	I1002 11:41:52.909590  371046 main.go:141] libmachine: Using SSH client type: native
	I1002 11:41:52.909917  371046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1002 11:41:52.909936  371046 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-204505' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-204505/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-204505' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:41:53.027167  371046 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:41:53.027197  371046 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:41:53.027238  371046 buildroot.go:174] setting up certificates
	I1002 11:41:53.027248  371046 provision.go:83] configureAuth start
	I1002 11:41:53.027261  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetMachineName
	I1002 11:41:53.027618  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetIP
	I1002 11:41:53.030188  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:53.030578  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:26:88", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:41:44 +0000 UTC Type:0 Mac:52:54:00:1b:26:88 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:stopped-upgrade-204505 Clientid:01:52:54:00:1b:26:88}
	I1002 11:41:53.030607  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined IP address 192.168.39.11 and MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:53.030797  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHHostname
	I1002 11:41:53.033238  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:53.033607  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:26:88", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:41:44 +0000 UTC Type:0 Mac:52:54:00:1b:26:88 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:stopped-upgrade-204505 Clientid:01:52:54:00:1b:26:88}
	I1002 11:41:53.033640  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined IP address 192.168.39.11 and MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:53.033780  371046 provision.go:138] copyHostCerts
	I1002 11:41:53.033874  371046 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:41:53.033888  371046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:41:53.033972  371046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:41:53.034160  371046 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:41:53.034173  371046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:41:53.034216  371046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:41:53.034291  371046 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:41:53.034301  371046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:41:53.034333  371046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:41:53.034415  371046 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-204505 san=[192.168.39.11 192.168.39.11 localhost 127.0.0.1 minikube stopped-upgrade-204505]
	I1002 11:41:53.145199  371046 provision.go:172] copyRemoteCerts
	I1002 11:41:53.145275  371046 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:41:53.145310  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHHostname
	I1002 11:41:53.148366  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:53.148793  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:26:88", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:41:44 +0000 UTC Type:0 Mac:52:54:00:1b:26:88 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:stopped-upgrade-204505 Clientid:01:52:54:00:1b:26:88}
	I1002 11:41:53.148831  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined IP address 192.168.39.11 and MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:53.149004  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHPort
	I1002 11:41:53.149246  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHKeyPath
	I1002 11:41:53.149460  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHUsername
	I1002 11:41:53.149667  371046 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/stopped-upgrade-204505/id_rsa Username:docker}
	I1002 11:41:53.233500  371046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 11:41:53.247468  371046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 11:41:53.260804  371046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:41:53.274535  371046 provision.go:86] duration metric: configureAuth took 247.262616ms
	I1002 11:41:53.274560  371046 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:41:53.274729  371046 config.go:182] Loaded profile config "stopped-upgrade-204505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1002 11:41:53.274802  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHHostname
	I1002 11:41:53.277169  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:53.277559  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:26:88", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:41:44 +0000 UTC Type:0 Mac:52:54:00:1b:26:88 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:stopped-upgrade-204505 Clientid:01:52:54:00:1b:26:88}
	I1002 11:41:53.277596  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined IP address 192.168.39.11 and MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:53.277772  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHPort
	I1002 11:41:53.277964  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHKeyPath
	I1002 11:41:53.278173  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHKeyPath
	I1002 11:41:53.278392  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHUsername
	I1002 11:41:53.278576  371046 main.go:141] libmachine: Using SSH client type: native
	I1002 11:41:53.278883  371046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1002 11:41:53.278900  371046 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:41:58.695060  371046 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:41:58.695093  371046 machine.go:91] provisioned docker machine in 5.911642791s
	I1002 11:41:58.695105  371046 start.go:300] post-start starting for "stopped-upgrade-204505" (driver="kvm2")
	I1002 11:41:58.695120  371046 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:41:58.695148  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .DriverName
	I1002 11:41:58.695507  371046 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:41:58.695548  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHHostname
	I1002 11:41:58.698304  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:58.698881  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:26:88", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:41:44 +0000 UTC Type:0 Mac:52:54:00:1b:26:88 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:stopped-upgrade-204505 Clientid:01:52:54:00:1b:26:88}
	I1002 11:41:58.698922  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined IP address 192.168.39.11 and MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:58.699056  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHPort
	I1002 11:41:58.699243  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHKeyPath
	I1002 11:41:58.699375  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHUsername
	I1002 11:41:58.699529  371046 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/stopped-upgrade-204505/id_rsa Username:docker}
	I1002 11:41:58.781446  371046 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:41:58.786914  371046 info.go:137] Remote host: Buildroot 2019.02.7
	I1002 11:41:58.786940  371046 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:41:58.787016  371046 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:41:58.787117  371046 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:41:58.787239  371046 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:41:58.793613  371046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:41:58.817708  371046 start.go:303] post-start completed in 122.583479ms
	I1002 11:41:58.817736  371046 fix.go:56] fixHost completed within 43.029872524s
	I1002 11:41:58.817757  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHHostname
	I1002 11:41:58.820743  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:58.826841  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:26:88", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:41:44 +0000 UTC Type:0 Mac:52:54:00:1b:26:88 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:stopped-upgrade-204505 Clientid:01:52:54:00:1b:26:88}
	I1002 11:41:58.826883  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined IP address 192.168.39.11 and MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:58.827050  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHPort
	I1002 11:41:58.827257  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHKeyPath
	I1002 11:41:58.827437  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHKeyPath
	I1002 11:41:58.827615  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHUsername
	I1002 11:41:58.827774  371046 main.go:141] libmachine: Using SSH client type: native
	I1002 11:41:58.828240  371046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1002 11:41:58.828261  371046 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 11:41:58.955531  371046 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696246918.884427777
	
	I1002 11:41:58.955560  371046 fix.go:206] guest clock: 1696246918.884427777
	I1002 11:41:58.955571  371046 fix.go:219] Guest: 2023-10-02 11:41:58.884427777 +0000 UTC Remote: 2023-10-02 11:41:58.817740045 +0000 UTC m=+59.432915517 (delta=66.687732ms)
	I1002 11:41:58.955638  371046 fix.go:190] guest clock delta is within tolerance: 66.687732ms
	I1002 11:41:58.955644  371046 start.go:83] releasing machines lock for "stopped-upgrade-204505", held for 43.167815309s
	I1002 11:41:58.955748  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .DriverName
	I1002 11:41:58.956290  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetIP
	I1002 11:41:58.960654  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:58.960981  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:26:88", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:41:44 +0000 UTC Type:0 Mac:52:54:00:1b:26:88 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:stopped-upgrade-204505 Clientid:01:52:54:00:1b:26:88}
	I1002 11:41:58.961015  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined IP address 192.168.39.11 and MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:58.961216  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .DriverName
	I1002 11:41:58.961810  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .DriverName
	I1002 11:41:58.961991  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .DriverName
	I1002 11:41:58.962111  371046 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:41:58.962164  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHHostname
	I1002 11:41:58.962251  371046 ssh_runner.go:195] Run: cat /version.json
	I1002 11:41:58.962283  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHHostname
	I1002 11:41:58.965045  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:58.965225  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:58.965450  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:26:88", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:41:44 +0000 UTC Type:0 Mac:52:54:00:1b:26:88 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:stopped-upgrade-204505 Clientid:01:52:54:00:1b:26:88}
	I1002 11:41:58.965495  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined IP address 192.168.39.11 and MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:58.965612  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHPort
	I1002 11:41:58.965670  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:26:88", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2023-10-02 12:41:44 +0000 UTC Type:0 Mac:52:54:00:1b:26:88 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:stopped-upgrade-204505 Clientid:01:52:54:00:1b:26:88}
	I1002 11:41:58.965701  371046 main.go:141] libmachine: (stopped-upgrade-204505) DBG | domain stopped-upgrade-204505 has defined IP address 192.168.39.11 and MAC address 52:54:00:1b:26:88 in network minikube-net
	I1002 11:41:58.965789  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHKeyPath
	I1002 11:41:58.965851  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHPort
	I1002 11:41:58.965970  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHUsername
	I1002 11:41:58.965999  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHKeyPath
	I1002 11:41:58.966116  371046 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/stopped-upgrade-204505/id_rsa Username:docker}
	I1002 11:41:58.966173  371046 main.go:141] libmachine: (stopped-upgrade-204505) Calling .GetSSHUsername
	I1002 11:41:58.966312  371046 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/stopped-upgrade-204505/id_rsa Username:docker}
	W1002 11:41:59.051597  371046 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1002 11:41:59.051705  371046 ssh_runner.go:195] Run: systemctl --version
	I1002 11:41:59.073993  371046 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:41:59.166029  371046 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:41:59.174319  371046 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:41:59.174434  371046 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:41:59.180889  371046 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 11:41:59.180917  371046 start.go:469] detecting cgroup driver to use...
	I1002 11:41:59.180984  371046 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:41:59.193657  371046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:41:59.204816  371046 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:41:59.204884  371046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:41:59.215025  371046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:41:59.228131  371046 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1002 11:41:59.239422  371046 docker.go:207] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1002 11:41:59.239504  371046 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:41:59.354209  371046 docker.go:213] disabling docker service ...
	I1002 11:41:59.354284  371046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:41:59.369585  371046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:41:59.380042  371046 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:41:59.487706  371046 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:41:59.635199  371046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:41:59.646901  371046 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:41:59.662066  371046 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1002 11:41:59.662125  371046 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:41:59.672645  371046 out.go:177] 
	W1002 11:41:59.674211  371046 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1002 11:41:59.674237  371046 out.go:239] * 
	W1002 11:41:59.675483  371046 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 11:41:59.677152  371046 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-204505 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (286.53s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (50.7s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-892275 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-892275 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.524072169s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-892275] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-892275 in cluster pause-892275
	* Updating the running kvm2 "pause-892275" VM ...
	* Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-892275" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 11:39:45.793780  370050 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:39:45.794002  370050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:39:45.794017  370050 out.go:309] Setting ErrFile to fd 2...
	I1002 11:39:45.794024  370050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:39:45.794233  370050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 11:39:45.794850  370050 out.go:303] Setting JSON to false
	I1002 11:39:45.795850  370050 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8532,"bootTime":1696238254,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 11:39:45.795911  370050 start.go:138] virtualization: kvm guest
	I1002 11:39:45.798115  370050 out.go:177] * [pause-892275] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 11:39:45.800189  370050 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 11:39:45.800104  370050 notify.go:220] Checking for updates...
	I1002 11:39:45.801857  370050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:39:45.803732  370050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:39:45.805527  370050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:39:45.807081  370050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 11:39:45.808861  370050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 11:39:45.811118  370050 config.go:182] Loaded profile config "pause-892275": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:39:45.811566  370050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:39:45.811623  370050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:39:45.829843  370050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39745
	I1002 11:39:45.830396  370050 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:39:45.831253  370050 main.go:141] libmachine: Using API Version  1
	I1002 11:39:45.831316  370050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:39:45.831803  370050 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:39:45.831972  370050 main.go:141] libmachine: (pause-892275) Calling .DriverName
	I1002 11:39:45.832208  370050 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 11:39:45.832583  370050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:39:45.832626  370050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:39:45.849007  370050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42663
	I1002 11:39:45.849620  370050 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:39:45.850141  370050 main.go:141] libmachine: Using API Version  1
	I1002 11:39:45.850173  370050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:39:45.850616  370050 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:39:45.850813  370050 main.go:141] libmachine: (pause-892275) Calling .DriverName
	I1002 11:39:45.887348  370050 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 11:39:45.888913  370050 start.go:298] selected driver: kvm2
	I1002 11:39:45.888930  370050 start.go:902] validating driver "kvm2" against &{Name:pause-892275 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-892275 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:39:45.889108  370050 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 11:39:45.889428  370050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:39:45.889514  370050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 11:39:45.907247  370050 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 11:39:45.907911  370050 cni.go:84] Creating CNI manager for ""
	I1002 11:39:45.907931  370050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:39:45.907948  370050 start_flags.go:321] config:
	{Name:pause-892275 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-892275 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:39:45.908200  370050 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:39:45.910216  370050 out.go:177] * Starting control plane node pause-892275 in cluster pause-892275
	I1002 11:39:45.911706  370050 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:39:45.911747  370050 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1002 11:39:45.911755  370050 cache.go:57] Caching tarball of preloaded images
	I1002 11:39:45.911856  370050 preload.go:174] Found /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 11:39:45.911871  370050 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 11:39:45.912007  370050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/pause-892275/config.json ...
	I1002 11:39:45.912234  370050 start.go:365] acquiring machines lock for pause-892275: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:39:46.458154  370050 start.go:369] acquired machines lock for "pause-892275" in 545.870976ms
	I1002 11:39:46.458209  370050 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:39:46.458217  370050 fix.go:54] fixHost starting: 
	I1002 11:39:46.458657  370050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:39:46.458713  370050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:39:46.477156  370050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38301
	I1002 11:39:46.477639  370050 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:39:46.478177  370050 main.go:141] libmachine: Using API Version  1
	I1002 11:39:46.478204  370050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:39:46.478580  370050 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:39:46.478748  370050 main.go:141] libmachine: (pause-892275) Calling .DriverName
	I1002 11:39:46.478872  370050 main.go:141] libmachine: (pause-892275) Calling .GetState
	I1002 11:39:46.480727  370050 fix.go:102] recreateIfNeeded on pause-892275: state=Running err=<nil>
	W1002 11:39:46.480749  370050 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:39:46.482737  370050 out.go:177] * Updating the running kvm2 "pause-892275" VM ...
	I1002 11:39:46.484109  370050 machine.go:88] provisioning docker machine ...
	I1002 11:39:46.484135  370050 main.go:141] libmachine: (pause-892275) Calling .DriverName
	I1002 11:39:46.484361  370050 main.go:141] libmachine: (pause-892275) Calling .GetMachineName
	I1002 11:39:46.484528  370050 buildroot.go:166] provisioning hostname "pause-892275"
	I1002 11:39:46.484552  370050 main.go:141] libmachine: (pause-892275) Calling .GetMachineName
	I1002 11:39:46.484716  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHHostname
	I1002 11:39:46.486928  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:46.487379  370050 main.go:141] libmachine: (pause-892275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:e9:c0", ip: ""} in network mk-pause-892275: {Iface:virbr3 ExpiryTime:2023-10-02 12:38:16 +0000 UTC Type:0 Mac:52:54:00:c0:e9:c0 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-892275 Clientid:01:52:54:00:c0:e9:c0}
	I1002 11:39:46.487405  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined IP address 192.168.61.203 and MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:46.487712  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHPort
	I1002 11:39:46.487875  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHKeyPath
	I1002 11:39:46.488027  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHKeyPath
	I1002 11:39:46.488147  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHUsername
	I1002 11:39:46.488307  370050 main.go:141] libmachine: Using SSH client type: native
	I1002 11:39:46.488692  370050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I1002 11:39:46.488708  370050 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-892275 && echo "pause-892275" | sudo tee /etc/hostname
	I1002 11:39:46.657969  370050 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-892275
	
	I1002 11:39:46.658003  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHHostname
	I1002 11:39:46.660702  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:46.661121  370050 main.go:141] libmachine: (pause-892275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:e9:c0", ip: ""} in network mk-pause-892275: {Iface:virbr3 ExpiryTime:2023-10-02 12:38:16 +0000 UTC Type:0 Mac:52:54:00:c0:e9:c0 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-892275 Clientid:01:52:54:00:c0:e9:c0}
	I1002 11:39:46.661175  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined IP address 192.168.61.203 and MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:46.661345  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHPort
	I1002 11:39:46.661593  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHKeyPath
	I1002 11:39:46.661817  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHKeyPath
	I1002 11:39:46.662009  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHUsername
	I1002 11:39:46.662204  370050 main.go:141] libmachine: Using SSH client type: native
	I1002 11:39:46.662690  370050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I1002 11:39:46.662722  370050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-892275' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-892275/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-892275' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:39:46.812089  370050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:39:46.812125  370050 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:39:46.812155  370050 buildroot.go:174] setting up certificates
	I1002 11:39:46.812174  370050 provision.go:83] configureAuth start
	I1002 11:39:46.812191  370050 main.go:141] libmachine: (pause-892275) Calling .GetMachineName
	I1002 11:39:46.812574  370050 main.go:141] libmachine: (pause-892275) Calling .GetIP
	I1002 11:39:46.815541  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:46.816040  370050 main.go:141] libmachine: (pause-892275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:e9:c0", ip: ""} in network mk-pause-892275: {Iface:virbr3 ExpiryTime:2023-10-02 12:38:16 +0000 UTC Type:0 Mac:52:54:00:c0:e9:c0 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-892275 Clientid:01:52:54:00:c0:e9:c0}
	I1002 11:39:46.816078  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined IP address 192.168.61.203 and MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:46.816470  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHHostname
	I1002 11:39:46.819149  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:46.819568  370050 main.go:141] libmachine: (pause-892275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:e9:c0", ip: ""} in network mk-pause-892275: {Iface:virbr3 ExpiryTime:2023-10-02 12:38:16 +0000 UTC Type:0 Mac:52:54:00:c0:e9:c0 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-892275 Clientid:01:52:54:00:c0:e9:c0}
	I1002 11:39:46.819617  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined IP address 192.168.61.203 and MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:46.819756  370050 provision.go:138] copyHostCerts
	I1002 11:39:46.819835  370050 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:39:46.819850  370050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:39:46.819922  370050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:39:46.820051  370050 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:39:46.820062  370050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:39:46.820100  370050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:39:46.820215  370050 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:39:46.820251  370050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:39:46.820293  370050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:39:46.820386  370050 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.pause-892275 san=[192.168.61.203 192.168.61.203 localhost 127.0.0.1 minikube pause-892275]
	I1002 11:39:46.914472  370050 provision.go:172] copyRemoteCerts
	I1002 11:39:46.914540  370050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:39:46.914573  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHHostname
	I1002 11:39:46.917813  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:46.918254  370050 main.go:141] libmachine: (pause-892275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:e9:c0", ip: ""} in network mk-pause-892275: {Iface:virbr3 ExpiryTime:2023-10-02 12:38:16 +0000 UTC Type:0 Mac:52:54:00:c0:e9:c0 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-892275 Clientid:01:52:54:00:c0:e9:c0}
	I1002 11:39:46.918318  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined IP address 192.168.61.203 and MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:46.918493  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHPort
	I1002 11:39:46.918696  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHKeyPath
	I1002 11:39:46.918885  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHUsername
	I1002 11:39:46.919099  370050 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/pause-892275/id_rsa Username:docker}
	I1002 11:39:47.020804  370050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:39:47.061938  370050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1002 11:39:47.089293  370050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 11:39:47.120901  370050 provision.go:86] duration metric: configureAuth took 308.707381ms
	I1002 11:39:47.120935  370050 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:39:47.121238  370050 config.go:182] Loaded profile config "pause-892275": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:39:47.121336  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHHostname
	I1002 11:39:47.124279  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:47.124632  370050 main.go:141] libmachine: (pause-892275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:e9:c0", ip: ""} in network mk-pause-892275: {Iface:virbr3 ExpiryTime:2023-10-02 12:38:16 +0000 UTC Type:0 Mac:52:54:00:c0:e9:c0 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-892275 Clientid:01:52:54:00:c0:e9:c0}
	I1002 11:39:47.124669  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined IP address 192.168.61.203 and MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:47.124881  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHPort
	I1002 11:39:47.125115  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHKeyPath
	I1002 11:39:47.125339  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHKeyPath
	I1002 11:39:47.125530  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHUsername
	I1002 11:39:47.125703  370050 main.go:141] libmachine: Using SSH client type: native
	I1002 11:39:47.126155  370050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I1002 11:39:47.126180  370050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:39:52.915403  370050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:39:52.915442  370050 machine.go:91] provisioned docker machine in 6.431316518s
	I1002 11:39:52.915456  370050 start.go:300] post-start starting for "pause-892275" (driver="kvm2")
	I1002 11:39:52.915471  370050 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:39:52.915545  370050 main.go:141] libmachine: (pause-892275) Calling .DriverName
	I1002 11:39:52.916066  370050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:39:52.916105  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHHostname
	I1002 11:39:52.919265  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:52.919786  370050 main.go:141] libmachine: (pause-892275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:e9:c0", ip: ""} in network mk-pause-892275: {Iface:virbr3 ExpiryTime:2023-10-02 12:38:16 +0000 UTC Type:0 Mac:52:54:00:c0:e9:c0 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-892275 Clientid:01:52:54:00:c0:e9:c0}
	I1002 11:39:52.919814  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined IP address 192.168.61.203 and MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:52.919967  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHPort
	I1002 11:39:52.920166  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHKeyPath
	I1002 11:39:52.920344  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHUsername
	I1002 11:39:52.920568  370050 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/pause-892275/id_rsa Username:docker}
	I1002 11:39:53.188836  370050 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:39:53.219559  370050 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:39:53.219657  370050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:39:53.219769  370050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:39:53.219940  370050 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:39:53.220105  370050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:39:53.264685  370050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:39:53.387064  370050 start.go:303] post-start completed in 471.5872ms
	I1002 11:39:53.387159  370050 fix.go:56] fixHost completed within 6.928941715s
	I1002 11:39:53.387201  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHHostname
	I1002 11:39:53.390406  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:53.390834  370050 main.go:141] libmachine: (pause-892275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:e9:c0", ip: ""} in network mk-pause-892275: {Iface:virbr3 ExpiryTime:2023-10-02 12:38:16 +0000 UTC Type:0 Mac:52:54:00:c0:e9:c0 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-892275 Clientid:01:52:54:00:c0:e9:c0}
	I1002 11:39:53.390895  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined IP address 192.168.61.203 and MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:53.391085  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHPort
	I1002 11:39:53.391358  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHKeyPath
	I1002 11:39:53.391639  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHKeyPath
	I1002 11:39:53.391879  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHUsername
	I1002 11:39:53.392137  370050 main.go:141] libmachine: Using SSH client type: native
	I1002 11:39:53.392621  370050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I1002 11:39:53.392636  370050 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 11:39:53.630997  370050 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696246793.627684780
	
	I1002 11:39:53.631088  370050 fix.go:206] guest clock: 1696246793.627684780
	I1002 11:39:53.631111  370050 fix.go:219] Guest: 2023-10-02 11:39:53.62768478 +0000 UTC Remote: 2023-10-02 11:39:53.387175983 +0000 UTC m=+7.632170034 (delta=240.508797ms)
	I1002 11:39:53.631162  370050 fix.go:190] guest clock delta is within tolerance: 240.508797ms
	I1002 11:39:53.631187  370050 start.go:83] releasing machines lock for "pause-892275", held for 7.173001697s
	I1002 11:39:53.631228  370050 main.go:141] libmachine: (pause-892275) Calling .DriverName
	I1002 11:39:53.631582  370050 main.go:141] libmachine: (pause-892275) Calling .GetIP
	I1002 11:39:53.634525  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:53.634967  370050 main.go:141] libmachine: (pause-892275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:e9:c0", ip: ""} in network mk-pause-892275: {Iface:virbr3 ExpiryTime:2023-10-02 12:38:16 +0000 UTC Type:0 Mac:52:54:00:c0:e9:c0 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-892275 Clientid:01:52:54:00:c0:e9:c0}
	I1002 11:39:53.635002  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined IP address 192.168.61.203 and MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:53.635305  370050 main.go:141] libmachine: (pause-892275) Calling .DriverName
	I1002 11:39:53.635910  370050 main.go:141] libmachine: (pause-892275) Calling .DriverName
	I1002 11:39:53.636147  370050 main.go:141] libmachine: (pause-892275) Calling .DriverName
	I1002 11:39:53.636242  370050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:39:53.636290  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHHostname
	I1002 11:39:53.636363  370050 ssh_runner.go:195] Run: cat /version.json
	I1002 11:39:53.636377  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHHostname
	I1002 11:39:53.639031  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:53.639227  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:53.639349  370050 main.go:141] libmachine: (pause-892275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:e9:c0", ip: ""} in network mk-pause-892275: {Iface:virbr3 ExpiryTime:2023-10-02 12:38:16 +0000 UTC Type:0 Mac:52:54:00:c0:e9:c0 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-892275 Clientid:01:52:54:00:c0:e9:c0}
	I1002 11:39:53.639406  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined IP address 192.168.61.203 and MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:53.639489  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHPort
	I1002 11:39:53.639661  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHKeyPath
	I1002 11:39:53.639755  370050 main.go:141] libmachine: (pause-892275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:e9:c0", ip: ""} in network mk-pause-892275: {Iface:virbr3 ExpiryTime:2023-10-02 12:38:16 +0000 UTC Type:0 Mac:52:54:00:c0:e9:c0 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-892275 Clientid:01:52:54:00:c0:e9:c0}
	I1002 11:39:53.639784  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHPort
	I1002 11:39:53.639792  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined IP address 192.168.61.203 and MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:53.639857  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHUsername
	I1002 11:39:53.639948  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHKeyPath
	I1002 11:39:53.640021  370050 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/pause-892275/id_rsa Username:docker}
	I1002 11:39:53.640111  370050 main.go:141] libmachine: (pause-892275) Calling .GetSSHUsername
	I1002 11:39:53.640262  370050 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/pause-892275/id_rsa Username:docker}
	I1002 11:39:53.748647  370050 ssh_runner.go:195] Run: systemctl --version
	I1002 11:39:53.794395  370050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:39:53.996145  370050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:39:54.010032  370050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:39:54.010133  370050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:39:54.032815  370050 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 11:39:54.032847  370050 start.go:469] detecting cgroup driver to use...
	I1002 11:39:54.032960  370050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:39:54.072522  370050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:39:54.115790  370050 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:39:54.115866  370050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:39:54.142403  370050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:39:54.180574  370050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:39:54.411564  370050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:39:54.806651  370050 docker.go:213] disabling docker service ...
	I1002 11:39:54.806724  370050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:39:54.852724  370050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:39:54.877916  370050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:39:55.452667  370050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:39:55.743830  370050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:39:55.775117  370050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:39:55.817314  370050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:39:55.817388  370050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:39:55.840313  370050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:39:55.840396  370050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:39:55.864092  370050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:39:55.887200  370050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:39:55.911317  370050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:39:55.939892  370050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:39:55.967309  370050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:39:55.998268  370050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:39:56.299334  370050 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:39:57.795827  370050 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.496448194s)
	I1002 11:39:57.795868  370050 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:39:57.795928  370050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:39:57.803215  370050 start.go:537] Will wait 60s for crictl version
	I1002 11:39:57.803287  370050 ssh_runner.go:195] Run: which crictl
	I1002 11:39:57.808265  370050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:39:57.864002  370050 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:39:57.864105  370050 ssh_runner.go:195] Run: crio --version
	I1002 11:39:57.923275  370050 ssh_runner.go:195] Run: crio --version
	I1002 11:39:57.975524  370050 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:39:57.976756  370050 main.go:141] libmachine: (pause-892275) Calling .GetIP
	I1002 11:39:57.979601  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:57.979993  370050 main.go:141] libmachine: (pause-892275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:e9:c0", ip: ""} in network mk-pause-892275: {Iface:virbr3 ExpiryTime:2023-10-02 12:38:16 +0000 UTC Type:0 Mac:52:54:00:c0:e9:c0 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-892275 Clientid:01:52:54:00:c0:e9:c0}
	I1002 11:39:57.980010  370050 main.go:141] libmachine: (pause-892275) DBG | domain pause-892275 has defined IP address 192.168.61.203 and MAC address 52:54:00:c0:e9:c0 in network mk-pause-892275
	I1002 11:39:57.980275  370050 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1002 11:39:57.984920  370050 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:39:57.984987  370050 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:39:58.035294  370050 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 11:39:58.035319  370050 crio.go:415] Images already preloaded, skipping extraction
	I1002 11:39:58.035379  370050 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:39:58.081764  370050 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 11:39:58.081790  370050 cache_images.go:84] Images are preloaded, skipping loading
	I1002 11:39:58.081882  370050 ssh_runner.go:195] Run: crio config
	I1002 11:39:58.210262  370050 cni.go:84] Creating CNI manager for ""
	I1002 11:39:58.210288  370050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:39:58.210307  370050 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:39:58.210324  370050 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.203 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-892275 NodeName:pause-892275 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:39:58.210508  370050 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-892275"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:39:58.210624  370050 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-892275 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:pause-892275 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:39:58.210706  370050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:39:58.539441  370050 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:39:58.539532  370050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:39:58.595977  370050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1002 11:39:58.623402  370050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:39:58.698595  370050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1002 11:39:58.726480  370050 ssh_runner.go:195] Run: grep 192.168.61.203	control-plane.minikube.internal$ /etc/hosts
	I1002 11:39:58.735970  370050 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/pause-892275 for IP: 192.168.61.203
	I1002 11:39:58.736000  370050 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:39:58.736159  370050 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:39:58.736206  370050 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:39:58.736269  370050 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/pause-892275/client.key
	I1002 11:39:58.736351  370050 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/pause-892275/apiserver.key.c2278fc4
	I1002 11:39:58.736397  370050 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/pause-892275/proxy-client.key
	I1002 11:39:58.736510  370050 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:39:58.736538  370050 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:39:58.736551  370050 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:39:58.736572  370050 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:39:58.736599  370050 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:39:58.736620  370050 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:39:58.736659  370050 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:39:58.737359  370050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/pause-892275/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:39:58.789605  370050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/pause-892275/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:39:58.836202  370050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/pause-892275/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:39:58.883084  370050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/pause-892275/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:39:58.927080  370050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:39:58.970434  370050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:39:59.018784  370050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:39:59.076439  370050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:39:59.134853  370050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:39:59.220617  370050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:39:59.273328  370050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:39:59.328652  370050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:39:59.357313  370050 ssh_runner.go:195] Run: openssl version
	I1002 11:39:59.366263  370050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:39:59.389033  370050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:39:59.400413  370050 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:39:59.400503  370050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:39:59.413890  370050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:39:59.432423  370050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:39:59.452235  370050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:39:59.460291  370050 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:39:59.460365  370050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:39:59.474586  370050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:39:59.493204  370050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:39:59.513068  370050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:39:59.523822  370050 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:39:59.523900  370050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:39:59.537262  370050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:39:59.554283  370050 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:39:59.564824  370050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:39:59.577999  370050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:39:59.589266  370050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:39:59.595823  370050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:39:59.604912  370050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:39:59.613215  370050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:39:59.630062  370050 kubeadm.go:404] StartCluster: {Name:pause-892275 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.2 ClusterName:pause-892275 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securi
ty-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:39:59.630218  370050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:39:59.630281  370050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:39:59.727332  370050 cri.go:89] found id: "0ca4784b618059d8b7840bbd5e90d613ac48d8bfb2d4672a3c8bb38cd3a20a29"
	I1002 11:39:59.727360  370050 cri.go:89] found id: "a97af4b5192afce52a285513fe73088be2dbe2cd1069f9622ed24ec79a47dd30"
	I1002 11:39:59.727367  370050 cri.go:89] found id: "9e78210aab09dcbbf4a1a0f6d5ea9d2d724740fbf655fb0013a85dc7b5816893"
	I1002 11:39:59.727373  370050 cri.go:89] found id: "1947700b5aee5ffa5b0ddab104249e20baa7782c4fe1ec00982ad30d0662b3b6"
	I1002 11:39:59.727379  370050 cri.go:89] found id: "14dadcd321818f77fe74ea8528bbad283818e8005fa2ab6b37fce904fe6ee0a6"
	I1002 11:39:59.727385  370050 cri.go:89] found id: "c6e2bca052f5723b64339a4cfdde20976d14d4cd769637a4e213ea9085ae775e"
	I1002 11:39:59.727390  370050 cri.go:89] found id: ""
	I1002 11:39:59.727439  370050 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-892275 -n pause-892275
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-892275 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-892275 logs -n 25: (1.452578155s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p NoKubernetes-083017                | NoKubernetes-083017       | jenkins | v1.31.2 | 02 Oct 23 11:35 UTC | 02 Oct 23 11:35 UTC |
	| start   | -p NoKubernetes-083017                | NoKubernetes-083017       | jenkins | v1.31.2 | 02 Oct 23 11:35 UTC | 02 Oct 23 11:36 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-819186 ssh cat     | force-systemd-flag-819186 | jenkins | v1.31.2 | 02 Oct 23 11:35 UTC | 02 Oct 23 11:35 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-819186          | force-systemd-flag-819186 | jenkins | v1.31.2 | 02 Oct 23 11:35 UTC | 02 Oct 23 11:35 UTC |
	| ssh     | cert-options-045561 ssh               | cert-options-045561       | jenkins | v1.31.2 | 02 Oct 23 11:35 UTC | 02 Oct 23 11:35 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-045561 -- sudo        | cert-options-045561       | jenkins | v1.31.2 | 02 Oct 23 11:35 UTC | 02 Oct 23 11:35 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-045561                | cert-options-045561       | jenkins | v1.31.2 | 02 Oct 23 11:35 UTC | 02 Oct 23 11:35 UTC |
	| start   | -p kubernetes-upgrade-613769          | kubernetes-upgrade-613769 | jenkins | v1.31.2 | 02 Oct 23 11:35 UTC | 02 Oct 23 11:37 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-083017 sudo           | NoKubernetes-083017       | jenkins | v1.31.2 | 02 Oct 23 11:36 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-083017                | NoKubernetes-083017       | jenkins | v1.31.2 | 02 Oct 23 11:36 UTC | 02 Oct 23 11:36 UTC |
	| start   | -p NoKubernetes-083017                | NoKubernetes-083017       | jenkins | v1.31.2 | 02 Oct 23 11:36 UTC |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-083017                | NoKubernetes-083017       | jenkins | v1.31.2 | 02 Oct 23 11:37 UTC | 02 Oct 23 11:37 UTC |
	| stop    | -p kubernetes-upgrade-613769          | kubernetes-upgrade-613769 | jenkins | v1.31.2 | 02 Oct 23 11:37 UTC | 02 Oct 23 11:37 UTC |
	| start   | -p running-upgrade-703246             | running-upgrade-703246    | jenkins | v1.31.2 | 02 Oct 23 11:37 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-613769          | kubernetes-upgrade-613769 | jenkins | v1.31.2 | 02 Oct 23 11:37 UTC | 02 Oct 23 11:38 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-703246             | running-upgrade-703246    | jenkins | v1.31.2 | 02 Oct 23 11:37 UTC | 02 Oct 23 11:37 UTC |
	| start   | -p pause-892275 --memory=2048         | pause-892275              | jenkins | v1.31.2 | 02 Oct 23 11:37 UTC | 02 Oct 23 11:39 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-613769          | kubernetes-upgrade-613769 | jenkins | v1.31.2 | 02 Oct 23 11:38 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-613769          | kubernetes-upgrade-613769 | jenkins | v1.31.2 | 02 Oct 23 11:38 UTC | 02 Oct 23 11:39 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-613769          | kubernetes-upgrade-613769 | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC | 02 Oct 23 11:39 UTC |
	| start   | -p auto-124285 --memory=3072          | auto-124285               | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-394393             | cert-expiration-394393    | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC | 02 Oct 23 11:40 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-892275                       | pause-892275              | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC | 02 Oct 23 11:40 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-394393             | cert-expiration-394393    | jenkins | v1.31.2 | 02 Oct 23 11:40 UTC | 02 Oct 23 11:40 UTC |
	| start   | -p kindnet-124285                     | kindnet-124285            | jenkins | v1.31.2 | 02 Oct 23 11:40 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 11:40:20
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 11:40:20.462755  370303 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:40:20.463061  370303 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:40:20.463074  370303 out.go:309] Setting ErrFile to fd 2...
	I1002 11:40:20.463082  370303 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:40:20.463371  370303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 11:40:20.464138  370303 out.go:303] Setting JSON to false
	I1002 11:40:20.465208  370303 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8567,"bootTime":1696238254,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 11:40:20.465266  370303 start.go:138] virtualization: kvm guest
	I1002 11:40:20.468748  370303 out.go:177] * [kindnet-124285] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 11:40:20.470296  370303 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 11:40:20.471925  370303 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:40:20.470310  370303 notify.go:220] Checking for updates...
	I1002 11:40:20.475142  370303 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:40:20.476587  370303 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:40:20.477982  370303 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 11:40:20.479531  370303 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 11:40:20.481479  370303 config.go:182] Loaded profile config "auto-124285": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:40:20.481619  370303 config.go:182] Loaded profile config "pause-892275": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:40:20.481675  370303 config.go:182] Loaded profile config "stopped-upgrade-204505": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1002 11:40:20.481787  370303 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 11:40:20.523663  370303 out.go:177] * Using the kvm2 driver based on user configuration
	I1002 11:40:20.525212  370303 start.go:298] selected driver: kvm2
	I1002 11:40:20.525231  370303 start.go:902] validating driver "kvm2" against <nil>
	I1002 11:40:20.525247  370303 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 11:40:20.525925  370303 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:40:20.526010  370303 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 11:40:20.541178  370303 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 11:40:20.541224  370303 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 11:40:20.541424  370303 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 11:40:20.541462  370303 cni.go:84] Creating CNI manager for "kindnet"
	I1002 11:40:20.541470  370303 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 11:40:20.541503  370303 start_flags.go:321] config:
	{Name:kindnet-124285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-124285 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:40:20.541620  370303 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:40:20.543527  370303 out.go:177] * Starting control plane node kindnet-124285 in cluster kindnet-124285
	I1002 11:40:20.544669  370303 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:40:20.544712  370303 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1002 11:40:20.544724  370303 cache.go:57] Caching tarball of preloaded images
	I1002 11:40:20.544826  370303 preload.go:174] Found /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 11:40:20.544841  370303 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 11:40:20.544934  370303 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/config.json ...
	I1002 11:40:20.544953  370303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/config.json: {Name:mk10d3c809bfedcd616c1689b1ae1599ed7c3186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:40:20.545123  370303 start.go:365] acquiring machines lock for kindnet-124285: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:40:20.545160  370303 start.go:369] acquired machines lock for "kindnet-124285" in 19.349µs
	I1002 11:40:20.545184  370303 start.go:93] Provisioning new machine with config: &{Name:kindnet-124285 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-124285 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:40:20.545283  370303 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 11:40:18.701183  370050 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:40:18.713504  370050 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:40:18.734064  370050 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:40:18.748420  370050 system_pods.go:59] 6 kube-system pods found
	I1002 11:40:18.748475  370050 system_pods.go:61] "coredns-5dd5756b68-4wp2m" [83150d98-2463-4dd5-ab60-18ea97aa0fbf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:40:18.748492  370050 system_pods.go:61] "etcd-pause-892275" [b59f99ce-a6e5-44b7-88d5-65507ac1abd6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:40:18.748503  370050 system_pods.go:61] "kube-apiserver-pause-892275" [91ef49df-2f4d-4799-b8b0-409c6bb79a94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:40:18.748518  370050 system_pods.go:61] "kube-controller-manager-pause-892275" [607268c2-f567-4792-8063-fdf09bf0ee8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 11:40:18.748533  370050 system_pods.go:61] "kube-proxy-h9rtm" [82952868-5c0c-4b75-a974-3d22d51657f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 11:40:18.748543  370050 system_pods.go:61] "kube-scheduler-pause-892275" [202c923b-a98c-4fc8-aaf2-527dbda63e56] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 11:40:18.748554  370050 system_pods.go:74] duration metric: took 14.463666ms to wait for pod list to return data ...
	I1002 11:40:18.748568  370050 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:40:18.753459  370050 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:40:18.753498  370050 node_conditions.go:123] node cpu capacity is 2
	I1002 11:40:18.753514  370050 node_conditions.go:105] duration metric: took 4.938886ms to run NodePressure ...
	I1002 11:40:18.753541  370050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:40:19.065894  370050 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:40:19.070941  370050 kubeadm.go:787] kubelet initialised
	I1002 11:40:19.070972  370050 kubeadm.go:788] duration metric: took 5.049847ms waiting for restarted kubelet to initialise ...
	I1002 11:40:19.070984  370050 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:40:19.076098  370050 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4wp2m" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:19.081753  370050 pod_ready.go:92] pod "coredns-5dd5756b68-4wp2m" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:19.081781  370050 pod_ready.go:81] duration metric: took 5.656185ms waiting for pod "coredns-5dd5756b68-4wp2m" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:19.081793  370050 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:16.807453  369698 pod_ready.go:102] pod "coredns-5dd5756b68-2phr6" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:19.301555  369698 pod_ready.go:102] pod "coredns-5dd5756b68-2phr6" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:21.305492  369698 pod_ready.go:102] pod "coredns-5dd5756b68-2phr6" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:20.546875  370303 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 11:40:20.547068  370303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:40:20.547121  370303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:40:20.561158  370303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33895
	I1002 11:40:20.561559  370303 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:40:20.562082  370303 main.go:141] libmachine: Using API Version  1
	I1002 11:40:20.562106  370303 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:40:20.562510  370303 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:40:20.562731  370303 main.go:141] libmachine: (kindnet-124285) Calling .GetMachineName
	I1002 11:40:20.562899  370303 main.go:141] libmachine: (kindnet-124285) Calling .DriverName
	I1002 11:40:20.563035  370303 start.go:159] libmachine.API.Create for "kindnet-124285" (driver="kvm2")
	I1002 11:40:20.563100  370303 client.go:168] LocalClient.Create starting
	I1002 11:40:20.563139  370303 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem
	I1002 11:40:20.563182  370303 main.go:141] libmachine: Decoding PEM data...
	I1002 11:40:20.563207  370303 main.go:141] libmachine: Parsing certificate...
	I1002 11:40:20.563303  370303 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem
	I1002 11:40:20.563331  370303 main.go:141] libmachine: Decoding PEM data...
	I1002 11:40:20.563348  370303 main.go:141] libmachine: Parsing certificate...
	I1002 11:40:20.563367  370303 main.go:141] libmachine: Running pre-create checks...
	I1002 11:40:20.563377  370303 main.go:141] libmachine: (kindnet-124285) Calling .PreCreateCheck
	I1002 11:40:20.563726  370303 main.go:141] libmachine: (kindnet-124285) Calling .GetConfigRaw
	I1002 11:40:20.564149  370303 main.go:141] libmachine: Creating machine...
	I1002 11:40:20.564167  370303 main.go:141] libmachine: (kindnet-124285) Calling .Create
	I1002 11:40:20.564310  370303 main.go:141] libmachine: (kindnet-124285) Creating KVM machine...
	I1002 11:40:20.565515  370303 main.go:141] libmachine: (kindnet-124285) DBG | found existing default KVM network
	I1002 11:40:20.566901  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:20.566749  370326 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:7e:bf:72} reservation:<nil>}
	I1002 11:40:20.567818  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:20.567724  370326 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:22:03:3f} reservation:<nil>}
	I1002 11:40:20.568777  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:20.568703  370326 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:5e:09:d2} reservation:<nil>}
	I1002 11:40:20.570012  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:20.569942  370326 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a9a80}
	I1002 11:40:20.575197  370303 main.go:141] libmachine: (kindnet-124285) DBG | trying to create private KVM network mk-kindnet-124285 192.168.72.0/24...
	I1002 11:40:20.651762  370303 main.go:141] libmachine: (kindnet-124285) DBG | private KVM network mk-kindnet-124285 192.168.72.0/24 created
	I1002 11:40:20.651899  370303 main.go:141] libmachine: (kindnet-124285) Setting up store path in /home/jenkins/minikube-integration/17340-332611/.minikube/machines/kindnet-124285 ...
	I1002 11:40:20.651946  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:20.651877  370326 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:40:20.651962  370303 main.go:141] libmachine: (kindnet-124285) Building disk image from file:///home/jenkins/minikube-integration/17340-332611/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1002 11:40:20.652031  370303 main.go:141] libmachine: (kindnet-124285) Downloading /home/jenkins/minikube-integration/17340-332611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17340-332611/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1002 11:40:20.917077  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:20.916959  370326 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/kindnet-124285/id_rsa...
	I1002 11:40:21.181405  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:21.181248  370326 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/kindnet-124285/kindnet-124285.rawdisk...
	I1002 11:40:21.181436  370303 main.go:141] libmachine: (kindnet-124285) DBG | Writing magic tar header
	I1002 11:40:21.181455  370303 main.go:141] libmachine: (kindnet-124285) DBG | Writing SSH key tar header
	I1002 11:40:21.181472  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:21.181353  370326 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17340-332611/.minikube/machines/kindnet-124285 ...
	I1002 11:40:21.181488  370303 main.go:141] libmachine: (kindnet-124285) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/kindnet-124285
	I1002 11:40:21.181565  370303 main.go:141] libmachine: (kindnet-124285) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube/machines
	I1002 11:40:21.181612  370303 main.go:141] libmachine: (kindnet-124285) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube/machines/kindnet-124285 (perms=drwx------)
	I1002 11:40:21.181625  370303 main.go:141] libmachine: (kindnet-124285) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:40:21.181676  370303 main.go:141] libmachine: (kindnet-124285) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611
	I1002 11:40:21.181700  370303 main.go:141] libmachine: (kindnet-124285) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube/machines (perms=drwxr-xr-x)
	I1002 11:40:21.181710  370303 main.go:141] libmachine: (kindnet-124285) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1002 11:40:21.181727  370303 main.go:141] libmachine: (kindnet-124285) DBG | Checking permissions on dir: /home/jenkins
	I1002 11:40:21.181743  370303 main.go:141] libmachine: (kindnet-124285) DBG | Checking permissions on dir: /home
	I1002 11:40:21.181757  370303 main.go:141] libmachine: (kindnet-124285) DBG | Skipping /home - not owner
	I1002 11:40:21.181775  370303 main.go:141] libmachine: (kindnet-124285) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube (perms=drwxr-xr-x)
	I1002 11:40:21.181793  370303 main.go:141] libmachine: (kindnet-124285) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611 (perms=drwxrwxr-x)
	I1002 11:40:21.181836  370303 main.go:141] libmachine: (kindnet-124285) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 11:40:21.181867  370303 main.go:141] libmachine: (kindnet-124285) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 11:40:21.181884  370303 main.go:141] libmachine: (kindnet-124285) Creating domain...
	I1002 11:40:21.182976  370303 main.go:141] libmachine: (kindnet-124285) define libvirt domain using xml: 
	I1002 11:40:21.183006  370303 main.go:141] libmachine: (kindnet-124285) <domain type='kvm'>
	I1002 11:40:21.183019  370303 main.go:141] libmachine: (kindnet-124285)   <name>kindnet-124285</name>
	I1002 11:40:21.183029  370303 main.go:141] libmachine: (kindnet-124285)   <memory unit='MiB'>3072</memory>
	I1002 11:40:21.183043  370303 main.go:141] libmachine: (kindnet-124285)   <vcpu>2</vcpu>
	I1002 11:40:21.183055  370303 main.go:141] libmachine: (kindnet-124285)   <features>
	I1002 11:40:21.183067  370303 main.go:141] libmachine: (kindnet-124285)     <acpi/>
	I1002 11:40:21.183078  370303 main.go:141] libmachine: (kindnet-124285)     <apic/>
	I1002 11:40:21.183091  370303 main.go:141] libmachine: (kindnet-124285)     <pae/>
	I1002 11:40:21.183099  370303 main.go:141] libmachine: (kindnet-124285)     
	I1002 11:40:21.183138  370303 main.go:141] libmachine: (kindnet-124285)   </features>
	I1002 11:40:21.183177  370303 main.go:141] libmachine: (kindnet-124285)   <cpu mode='host-passthrough'>
	I1002 11:40:21.183203  370303 main.go:141] libmachine: (kindnet-124285)   
	I1002 11:40:21.183218  370303 main.go:141] libmachine: (kindnet-124285)   </cpu>
	I1002 11:40:21.183229  370303 main.go:141] libmachine: (kindnet-124285)   <os>
	I1002 11:40:21.183243  370303 main.go:141] libmachine: (kindnet-124285)     <type>hvm</type>
	I1002 11:40:21.183254  370303 main.go:141] libmachine: (kindnet-124285)     <boot dev='cdrom'/>
	I1002 11:40:21.183267  370303 main.go:141] libmachine: (kindnet-124285)     <boot dev='hd'/>
	I1002 11:40:21.183284  370303 main.go:141] libmachine: (kindnet-124285)     <bootmenu enable='no'/>
	I1002 11:40:21.183293  370303 main.go:141] libmachine: (kindnet-124285)   </os>
	I1002 11:40:21.183301  370303 main.go:141] libmachine: (kindnet-124285)   <devices>
	I1002 11:40:21.183309  370303 main.go:141] libmachine: (kindnet-124285)     <disk type='file' device='cdrom'>
	I1002 11:40:21.183320  370303 main.go:141] libmachine: (kindnet-124285)       <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/kindnet-124285/boot2docker.iso'/>
	I1002 11:40:21.183329  370303 main.go:141] libmachine: (kindnet-124285)       <target dev='hdc' bus='scsi'/>
	I1002 11:40:21.183335  370303 main.go:141] libmachine: (kindnet-124285)       <readonly/>
	I1002 11:40:21.183343  370303 main.go:141] libmachine: (kindnet-124285)     </disk>
	I1002 11:40:21.183351  370303 main.go:141] libmachine: (kindnet-124285)     <disk type='file' device='disk'>
	I1002 11:40:21.183397  370303 main.go:141] libmachine: (kindnet-124285)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 11:40:21.183493  370303 main.go:141] libmachine: (kindnet-124285)       <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/kindnet-124285/kindnet-124285.rawdisk'/>
	I1002 11:40:21.183537  370303 main.go:141] libmachine: (kindnet-124285)       <target dev='hda' bus='virtio'/>
	I1002 11:40:21.183558  370303 main.go:141] libmachine: (kindnet-124285)     </disk>
	I1002 11:40:21.183573  370303 main.go:141] libmachine: (kindnet-124285)     <interface type='network'>
	I1002 11:40:21.183581  370303 main.go:141] libmachine: (kindnet-124285)       <source network='mk-kindnet-124285'/>
	I1002 11:40:21.183594  370303 main.go:141] libmachine: (kindnet-124285)       <model type='virtio'/>
	I1002 11:40:21.183604  370303 main.go:141] libmachine: (kindnet-124285)     </interface>
	I1002 11:40:21.183611  370303 main.go:141] libmachine: (kindnet-124285)     <interface type='network'>
	I1002 11:40:21.183619  370303 main.go:141] libmachine: (kindnet-124285)       <source network='default'/>
	I1002 11:40:21.183651  370303 main.go:141] libmachine: (kindnet-124285)       <model type='virtio'/>
	I1002 11:40:21.183676  370303 main.go:141] libmachine: (kindnet-124285)     </interface>
	I1002 11:40:21.183692  370303 main.go:141] libmachine: (kindnet-124285)     <serial type='pty'>
	I1002 11:40:21.183703  370303 main.go:141] libmachine: (kindnet-124285)       <target port='0'/>
	I1002 11:40:21.183717  370303 main.go:141] libmachine: (kindnet-124285)     </serial>
	I1002 11:40:21.183730  370303 main.go:141] libmachine: (kindnet-124285)     <console type='pty'>
	I1002 11:40:21.183743  370303 main.go:141] libmachine: (kindnet-124285)       <target type='serial' port='0'/>
	I1002 11:40:21.183761  370303 main.go:141] libmachine: (kindnet-124285)     </console>
	I1002 11:40:21.183782  370303 main.go:141] libmachine: (kindnet-124285)     <rng model='virtio'>
	I1002 11:40:21.183808  370303 main.go:141] libmachine: (kindnet-124285)       <backend model='random'>/dev/random</backend>
	I1002 11:40:21.183823  370303 main.go:141] libmachine: (kindnet-124285)     </rng>
	I1002 11:40:21.183834  370303 main.go:141] libmachine: (kindnet-124285)     
	I1002 11:40:21.183854  370303 main.go:141] libmachine: (kindnet-124285)     
	I1002 11:40:21.183867  370303 main.go:141] libmachine: (kindnet-124285)   </devices>
	I1002 11:40:21.183885  370303 main.go:141] libmachine: (kindnet-124285) </domain>
	I1002 11:40:21.183896  370303 main.go:141] libmachine: (kindnet-124285) 
	I1002 11:40:21.188027  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:e8:b0:00 in network default
	I1002 11:40:21.188682  370303 main.go:141] libmachine: (kindnet-124285) Ensuring networks are active...
	I1002 11:40:21.188707  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:21.189496  370303 main.go:141] libmachine: (kindnet-124285) Ensuring network default is active
	I1002 11:40:21.189995  370303 main.go:141] libmachine: (kindnet-124285) Ensuring network mk-kindnet-124285 is active
	I1002 11:40:21.190621  370303 main.go:141] libmachine: (kindnet-124285) Getting domain xml...
	I1002 11:40:21.191362  370303 main.go:141] libmachine: (kindnet-124285) Creating domain...
	I1002 11:40:22.516962  370303 main.go:141] libmachine: (kindnet-124285) Waiting to get IP...
	I1002 11:40:22.517712  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:22.518336  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:22.518386  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:22.518302  370326 retry.go:31] will retry after 189.410036ms: waiting for machine to come up
	I1002 11:40:22.709706  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:22.710251  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:22.710277  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:22.710192  370326 retry.go:31] will retry after 309.034782ms: waiting for machine to come up
	I1002 11:40:23.020627  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:23.021233  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:23.021266  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:23.021188  370326 retry.go:31] will retry after 372.780693ms: waiting for machine to come up
	I1002 11:40:23.395824  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:23.396366  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:23.396395  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:23.396301  370326 retry.go:31] will retry after 375.415276ms: waiting for machine to come up
	I1002 11:40:23.772927  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:23.773434  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:23.773469  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:23.773376  370326 retry.go:31] will retry after 714.473858ms: waiting for machine to come up
	I1002 11:40:24.489320  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:24.489793  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:24.489817  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:24.489737  370326 retry.go:31] will retry after 913.129752ms: waiting for machine to come up
	I1002 11:40:25.404760  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:25.405317  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:25.405356  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:25.405257  370326 retry.go:31] will retry after 846.144185ms: waiting for machine to come up
	I1002 11:40:21.101537  370050 pod_ready.go:102] pod "etcd-pause-892275" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:23.106766  370050 pod_ready.go:102] pod "etcd-pause-892275" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:25.604374  370050 pod_ready.go:102] pod "etcd-pause-892275" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:23.801522  369698 pod_ready.go:102] pod "coredns-5dd5756b68-2phr6" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:25.814752  369698 pod_ready.go:97] pod "coredns-5dd5756b68-2phr6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 11:40:11 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 11:40:11 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 11:40:11 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 11:40:11 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.83.76 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-10-02 11:40:11 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-10-02 11:40:15 +0000 UTC,FinishedAt:2023-10-02 11:40:25 +0000 UTC,ContainerID:cri-o://f0e8734f1d44c9de69cec2f9e51ad07d190b57a28ea9d061449818a731bfee5c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://f0e8734f1d44c9de69cec2f9e51ad07d190b57a28ea9d061449818a731bfee5c Started:0xc00159cac0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1002 11:40:25.814795  369698 pod_ready.go:81] duration metric: took 13.093668304s waiting for pod "coredns-5dd5756b68-2phr6" in "kube-system" namespace to be "Ready" ...
	E1002 11:40:25.814809  369698 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-2phr6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 11:40:11 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 11:40:11 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 11:40:11 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 11:40:11 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.83.76 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-10-02 11:40:11 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-10-02 11:40:15 +0000 UTC,FinishedAt:2023-10-02 11:40:25 +0000 UTC,ContainerID:cri-o://f0e8734f1d44c9de69cec2f9e51ad07d190b57a28ea9d061449818a731bfee5c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://f0e8734f1d44c9de69cec2f9e51ad07d190b57a28ea9d061449818a731bfee5c Started:0xc00159cac0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1002 11:40:25.814832  369698 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-7wdkz" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.103026  370050 pod_ready.go:102] pod "etcd-pause-892275" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:28.608956  370050 pod_ready.go:92] pod "etcd-pause-892275" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:28.608985  370050 pod_ready.go:81] duration metric: took 9.527181751s waiting for pod "etcd-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.608998  370050 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.618824  370050 pod_ready.go:92] pod "kube-apiserver-pause-892275" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:28.618850  370050 pod_ready.go:81] duration metric: took 9.844244ms waiting for pod "kube-apiserver-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.618864  370050 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.626581  370050 pod_ready.go:92] pod "kube-controller-manager-pause-892275" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:28.626613  370050 pod_ready.go:81] duration metric: took 7.729707ms waiting for pod "kube-controller-manager-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.626633  370050 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h9rtm" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.635132  370050 pod_ready.go:92] pod "kube-proxy-h9rtm" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:28.635154  370050 pod_ready.go:81] duration metric: took 8.511995ms waiting for pod "kube-proxy-h9rtm" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.635167  370050 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.641399  370050 pod_ready.go:92] pod "kube-scheduler-pause-892275" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:28.641421  370050 pod_ready.go:81] duration metric: took 6.246029ms waiting for pod "kube-scheduler-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.641431  370050 pod_ready.go:38] duration metric: took 9.570434191s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:40:28.641454  370050 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:40:28.653974  370050 ops.go:34] apiserver oom_adj: -16
	I1002 11:40:28.653996  370050 kubeadm.go:640] restartCluster took 28.830170134s
	I1002 11:40:28.654007  370050 kubeadm.go:406] StartCluster complete in 29.023961643s
	I1002 11:40:28.654029  370050 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:40:28.654116  370050 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:40:28.655531  370050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:40:28.655824  370050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:40:28.655991  370050 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:40:28.659621  370050 out.go:177] * Enabled addons: 
	I1002 11:40:28.656145  370050 config.go:182] Loaded profile config "pause-892275": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:40:28.656992  370050 kapi.go:59] client config for pause-892275: &rest.Config{Host:"https://192.168.61.203:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/pause-892275/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/pause-892275/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:40:28.661031  370050 addons.go:502] enable addons completed in 5.049101ms: enabled=[]
	I1002 11:40:28.664158  370050 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-892275" context rescaled to 1 replicas
	I1002 11:40:28.664197  370050 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:40:28.665830  370050 out.go:177] * Verifying Kubernetes components...
	I1002 11:40:26.253023  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:26.253568  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:26.253594  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:26.253483  370326 retry.go:31] will retry after 1.071722283s: waiting for machine to come up
	I1002 11:40:27.326722  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:27.327265  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:27.327297  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:27.327208  370326 retry.go:31] will retry after 1.393629531s: waiting for machine to come up
	I1002 11:40:28.722921  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:28.723481  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:28.723513  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:28.723413  370326 retry.go:31] will retry after 1.735217347s: waiting for machine to come up
	I1002 11:40:30.460295  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:30.460853  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:30.460888  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:30.460773  370326 retry.go:31] will retry after 2.452036692s: waiting for machine to come up
	I1002 11:40:28.667094  370050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:40:28.769413  370050 node_ready.go:35] waiting up to 6m0s for node "pause-892275" to be "Ready" ...
	I1002 11:40:28.769457  370050 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1002 11:40:28.798318  370050 node_ready.go:49] node "pause-892275" has status "Ready":"True"
	I1002 11:40:28.798346  370050 node_ready.go:38] duration metric: took 28.893426ms waiting for node "pause-892275" to be "Ready" ...
	I1002 11:40:28.798383  370050 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:40:29.000679  370050 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4wp2m" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:29.398702  370050 pod_ready.go:92] pod "coredns-5dd5756b68-4wp2m" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:29.398735  370050 pod_ready.go:81] duration metric: took 398.027503ms waiting for pod "coredns-5dd5756b68-4wp2m" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:29.398751  370050 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:29.801306  370050 pod_ready.go:92] pod "etcd-pause-892275" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:29.801341  370050 pod_ready.go:81] duration metric: took 402.580896ms waiting for pod "etcd-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:29.801354  370050 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:30.200054  370050 pod_ready.go:92] pod "kube-apiserver-pause-892275" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:30.200098  370050 pod_ready.go:81] duration metric: took 398.735302ms waiting for pod "kube-apiserver-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:30.200114  370050 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:30.601041  370050 pod_ready.go:92] pod "kube-controller-manager-pause-892275" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:30.601069  370050 pod_ready.go:81] duration metric: took 400.946087ms waiting for pod "kube-controller-manager-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:30.601081  370050 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h9rtm" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:27.851417  369698 pod_ready.go:102] pod "coredns-5dd5756b68-7wdkz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:30.351968  369698 pod_ready.go:102] pod "coredns-5dd5756b68-7wdkz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:30.998854  370050 pod_ready.go:92] pod "kube-proxy-h9rtm" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:30.998881  370050 pod_ready.go:81] duration metric: took 397.791788ms waiting for pod "kube-proxy-h9rtm" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:30.998894  370050 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:31.401358  370050 pod_ready.go:92] pod "kube-scheduler-pause-892275" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:31.401383  370050 pod_ready.go:81] duration metric: took 402.481703ms waiting for pod "kube-scheduler-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:31.401391  370050 pod_ready.go:38] duration metric: took 2.602996119s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:40:31.401412  370050 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:40:31.401470  370050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:40:31.418527  370050 api_server.go:72] duration metric: took 2.754292051s to wait for apiserver process to appear ...
	I1002 11:40:31.418561  370050 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:40:31.418585  370050 api_server.go:253] Checking apiserver healthz at https://192.168.61.203:8443/healthz ...
	I1002 11:40:31.427123  370050 api_server.go:279] https://192.168.61.203:8443/healthz returned 200:
	ok
	I1002 11:40:31.428838  370050 api_server.go:141] control plane version: v1.28.2
	I1002 11:40:31.428863  370050 api_server.go:131] duration metric: took 10.292083ms to wait for apiserver health ...
	I1002 11:40:31.428874  370050 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:40:31.601650  370050 system_pods.go:59] 6 kube-system pods found
	I1002 11:40:31.601692  370050 system_pods.go:61] "coredns-5dd5756b68-4wp2m" [83150d98-2463-4dd5-ab60-18ea97aa0fbf] Running
	I1002 11:40:31.601700  370050 system_pods.go:61] "etcd-pause-892275" [b59f99ce-a6e5-44b7-88d5-65507ac1abd6] Running
	I1002 11:40:31.601709  370050 system_pods.go:61] "kube-apiserver-pause-892275" [91ef49df-2f4d-4799-b8b0-409c6bb79a94] Running
	I1002 11:40:31.601715  370050 system_pods.go:61] "kube-controller-manager-pause-892275" [607268c2-f567-4792-8063-fdf09bf0ee8e] Running
	I1002 11:40:31.601722  370050 system_pods.go:61] "kube-proxy-h9rtm" [82952868-5c0c-4b75-a974-3d22d51657f1] Running
	I1002 11:40:31.601728  370050 system_pods.go:61] "kube-scheduler-pause-892275" [202c923b-a98c-4fc8-aaf2-527dbda63e56] Running
	I1002 11:40:31.601737  370050 system_pods.go:74] duration metric: took 172.855801ms to wait for pod list to return data ...
	I1002 11:40:31.601747  370050 default_sa.go:34] waiting for default service account to be created ...
	I1002 11:40:31.813739  370050 default_sa.go:45] found service account: "default"
	I1002 11:40:31.813837  370050 default_sa.go:55] duration metric: took 212.075355ms for default service account to be created ...
	I1002 11:40:31.813863  370050 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 11:40:32.001829  370050 system_pods.go:86] 6 kube-system pods found
	I1002 11:40:32.001869  370050 system_pods.go:89] "coredns-5dd5756b68-4wp2m" [83150d98-2463-4dd5-ab60-18ea97aa0fbf] Running
	I1002 11:40:32.001878  370050 system_pods.go:89] "etcd-pause-892275" [b59f99ce-a6e5-44b7-88d5-65507ac1abd6] Running
	I1002 11:40:32.001885  370050 system_pods.go:89] "kube-apiserver-pause-892275" [91ef49df-2f4d-4799-b8b0-409c6bb79a94] Running
	I1002 11:40:32.001892  370050 system_pods.go:89] "kube-controller-manager-pause-892275" [607268c2-f567-4792-8063-fdf09bf0ee8e] Running
	I1002 11:40:32.001899  370050 system_pods.go:89] "kube-proxy-h9rtm" [82952868-5c0c-4b75-a974-3d22d51657f1] Running
	I1002 11:40:32.001904  370050 system_pods.go:89] "kube-scheduler-pause-892275" [202c923b-a98c-4fc8-aaf2-527dbda63e56] Running
	I1002 11:40:32.001915  370050 system_pods.go:126] duration metric: took 188.034831ms to wait for k8s-apps to be running ...
	I1002 11:40:32.001941  370050 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:40:32.002001  370050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:40:32.016812  370050 system_svc.go:56] duration metric: took 14.858534ms WaitForService to wait for kubelet.
	I1002 11:40:32.016846  370050 kubeadm.go:581] duration metric: took 3.352623583s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:40:32.016872  370050 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:40:32.199137  370050 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:40:32.199169  370050 node_conditions.go:123] node cpu capacity is 2
	I1002 11:40:32.199184  370050 node_conditions.go:105] duration metric: took 182.305653ms to run NodePressure ...
	I1002 11:40:32.199198  370050 start.go:228] waiting for startup goroutines ...
	I1002 11:40:32.199206  370050 start.go:233] waiting for cluster config update ...
	I1002 11:40:32.199214  370050 start.go:242] writing updated cluster config ...
	I1002 11:40:32.199586  370050 ssh_runner.go:195] Run: rm -f paused
	I1002 11:40:32.258910  370050 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 11:40:32.261752  370050 out.go:177] * Done! kubectl is now configured to use "pause-892275" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-02 11:38:12 UTC, ends at Mon 2023-10-02 11:40:33 UTC. --
	Oct 02 11:40:32 pause-892275 crio[2601]: time="2023-10-02 11:40:32.999286675Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696246832999260871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=06b282b4-d29e-46ff-a958-c75790df6a1e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.004912169Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fedaf6bb-9e02-425d-ad0a-7ef0219398c9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.005039299Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fedaf6bb-9e02-425d-ad0a-7ef0219398c9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.005377412Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e056231c4dab5ce96e0f2ba6866351ea183a365a9098c932dead4cefb57dee3f,PodSandboxId:a8e1583dec1a8344c6727980581c8d511fa2c8252086307ac699ff5d38bd50ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696246817803694710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4wp2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83150d98-2463-4dd5-ab60-18ea97aa0fbf,},Annotations:map[string]string{io.kubernetes.container.hash: ea935ff7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a36b6719cdfce33e6b158f5be54123a96751a9a8a91cae0481038b304f0da6,PodSandboxId:e64f3caa15a5147a018f00af77784bc64b27a4804bbfdc57cdce6d6fda2ee319,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696246817765727345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9rtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82952868-5c0c-4b75-a974-3d22d51657f1,},Annotations:map[string]string{io.kubernetes.container.hash: efb652b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e1e4107ae31b34209ea1017506c5be20040ab231df5cac960bcc268e4b6997,PodSandboxId:a1596a37cbb748e16336b494514b1e691999f6caf039d19eb14e424937086107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696246812287224653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d94e6e9fb11e07718e5e3bf90e75114,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad66ee2a5f6dd2972fc383c1232bcdb3d8228769c6d29812fff33d00982ab32,PodSandboxId:a2553e3185728fd9816d1afb6ae1c4bd0d5ee985202b15be0e232e99217e626c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696246812319497756,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b61273fda713f0cdc7bb68bf6df3df0,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a13b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f72cb018c252b02d527b0f1a41f42c0266c273bb327fe53aa78893401e8825,PodSandboxId:ac0b5c67a83a62175e69cc8e5fab917306f959baa2609f8088eed04b382d6fef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696246812227595175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc76a95a27748f62d922d71c984e5e27,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7f3576c72351bd7b5608b92a734d6256b987a755e9fc85288f633cd931f24d,PodSandboxId:79a4c71f4945d7de2d071b0f4f4681283668cba87797d3c2711a20782330182e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696246812260502456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cfe5fdbd752391d091f67bf5a0ee880,},Annotations:map[string]string{io.kubernetes.container.hash: 71e046b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5cafeee47dfcba3aeac4aaa3d2f1f8f613c7a2ca593b2d4bc5b4c508c79f52b,PodSandboxId:e64f3caa15a5147a018f00af77784bc64b27a4804bbfdc57cdce6d6fda2ee319,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_EXITED,CreatedAt:1696246800315900205,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9rtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82952868-5c0c-4b75-a974-3d22d51657f1,},Annotations:map[string]string{io.kubernetes.container.hash: efb652b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ca4784b618059d8b7840bbd5e90d613ac48d8bfb2d4672a3c8bb38cd3a20a29,PodSandboxId:15b68d076f5e460d4666c4defab88fb26d2323006c9f302e5a79779b2f1e2ec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696246795568890726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4wp2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83150d98-2463-4dd5-ab60-18ea97aa0fbf,},Annotations:map[string]string{io.kubernetes.container.hash: ea935ff7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97af4b5192afce52a285513fe73088be2dbe2cd1069f9622ed24ec79a47dd30,PodSandboxId:5d23fa1e437fa79063ea5d81e3fac650cc529132a0f7152c572b14c839c98b72,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1696246795294121084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cfe5fdbd752391d091f67bf5a0ee880,},Annotations:map[string]string{io.kubernetes.container.hash: 71e046b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1947700b5aee5ffa5b0ddab104249e20baa7782c4fe1ec00982ad30d0662b3b6,PodSandboxId:9d3f771c04ca97cba54b854719dfe832a8cd5ec85b9181669b4b7ca7f1b0e5e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,State:CONTAINER_EXITED,CreatedAt:1696246794730446299,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d94e6e9fb11e07718e5e3bf90e75114,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e78210aab09dcbbf4a1a0f6d5ea9d2d724740fbf655fb0013a85dc7b5816893,PodSandboxId:6fad838b7d3eedc368d31c1317839cd0af030d5dc0ce7086af0f19de9c631e0f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,State:CONTAINER_EXITED,CreatedAt:1696246794833339019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc76a95a27748f62d922d71c984e5e27,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dadcd321818f77fe74ea8528bbad283818e8005fa2ab6b37fce904fe6ee0a6,PodSandboxId:5d657a476eeb8566fd64b820d502971fb1e4e24ca8a4900d7714bebf97b74eeb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,State:CONTAINER_EXITED,CreatedAt:1696246794078692088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b61273fda713f0cdc7bb68bf6df3df0,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a13b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fedaf6bb-9e02-425d-ad0a-7ef0219398c9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.058398085Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8366a86b-2e09-41c0-ae7e-6e06262c26cf name=/runtime.v1.RuntimeService/Version
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.058520096Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8366a86b-2e09-41c0-ae7e-6e06262c26cf name=/runtime.v1.RuntimeService/Version
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.060455779Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a38b3a6b-5a70-4312-98e0-6a28aa793246 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.061245991Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696246833061224704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=a38b3a6b-5a70-4312-98e0-6a28aa793246 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.063108132Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2485db12-b334-4d06-846c-a914eae8c05c name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.063275136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2485db12-b334-4d06-846c-a914eae8c05c name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.063617144Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e056231c4dab5ce96e0f2ba6866351ea183a365a9098c932dead4cefb57dee3f,PodSandboxId:a8e1583dec1a8344c6727980581c8d511fa2c8252086307ac699ff5d38bd50ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696246817803694710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4wp2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83150d98-2463-4dd5-ab60-18ea97aa0fbf,},Annotations:map[string]string{io.kubernetes.container.hash: ea935ff7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a36b6719cdfce33e6b158f5be54123a96751a9a8a91cae0481038b304f0da6,PodSandboxId:e64f3caa15a5147a018f00af77784bc64b27a4804bbfdc57cdce6d6fda2ee319,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696246817765727345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9rtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 82952868-5c0c-4b75-a974-3d22d51657f1,},Annotations:map[string]string{io.kubernetes.container.hash: efb652b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e1e4107ae31b34209ea1017506c5be20040ab231df5cac960bcc268e4b6997,PodSandboxId:a1596a37cbb748e16336b494514b1e691999f6caf039d19eb14e424937086107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696246812287224653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d94e6e
9fb11e07718e5e3bf90e75114,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad66ee2a5f6dd2972fc383c1232bcdb3d8228769c6d29812fff33d00982ab32,PodSandboxId:a2553e3185728fd9816d1afb6ae1c4bd0d5ee985202b15be0e232e99217e626c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696246812319497756,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b61273fda713f0cdc7bb68
bf6df3df0,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a13b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f72cb018c252b02d527b0f1a41f42c0266c273bb327fe53aa78893401e8825,PodSandboxId:ac0b5c67a83a62175e69cc8e5fab917306f959baa2609f8088eed04b382d6fef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696246812227595175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7
6a95a27748f62d922d71c984e5e27,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7f3576c72351bd7b5608b92a734d6256b987a755e9fc85288f633cd931f24d,PodSandboxId:79a4c71f4945d7de2d071b0f4f4681283668cba87797d3c2711a20782330182e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696246812260502456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cfe5fdbd752391d091f67bf5a0ee880,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 71e046b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5cafeee47dfcba3aeac4aaa3d2f1f8f613c7a2ca593b2d4bc5b4c508c79f52b,PodSandboxId:e64f3caa15a5147a018f00af77784bc64b27a4804bbfdc57cdce6d6fda2ee319,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_EXITED,CreatedAt:1696246800315900205,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9rtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82952868-5c0c-4b75-a974-3d22d51657f1,},Annotations:map[string]string{io.kubernetes.container.hash:
efb652b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ca4784b618059d8b7840bbd5e90d613ac48d8bfb2d4672a3c8bb38cd3a20a29,PodSandboxId:15b68d076f5e460d4666c4defab88fb26d2323006c9f302e5a79779b2f1e2ec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696246795568890726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4wp2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83150d98-2463-4dd5-ab60-18ea97aa0fbf,},Annotations:map[string]string{io.kubernetes.container.hash: ea935ff7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97af4b5192afce52a285513fe73088be2dbe2cd1069f9622ed24ec79a47dd30,PodSandboxId:5d23fa1e437fa79063ea5d81e3fac650cc529132a0f7152c572b14c839c98b72,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1696246795294121084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cfe5fdbd752391d091f67
bf5a0ee880,},Annotations:map[string]string{io.kubernetes.container.hash: 71e046b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1947700b5aee5ffa5b0ddab104249e20baa7782c4fe1ec00982ad30d0662b3b6,PodSandboxId:9d3f771c04ca97cba54b854719dfe832a8cd5ec85b9181669b4b7ca7f1b0e5e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,State:CONTAINER_EXITED,CreatedAt:1696246794730446299,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d94e6e9fb11e07718e5e3bf90e75114,},Annotations:map[string]string{io.kubernete
s.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e78210aab09dcbbf4a1a0f6d5ea9d2d724740fbf655fb0013a85dc7b5816893,PodSandboxId:6fad838b7d3eedc368d31c1317839cd0af030d5dc0ce7086af0f19de9c631e0f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,State:CONTAINER_EXITED,CreatedAt:1696246794833339019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc76a95a27748f62d922d71c984e5e27,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dadcd321818f77fe74ea8528bbad283818e8005fa2ab6b37fce904fe6ee0a6,PodSandboxId:5d657a476eeb8566fd64b820d502971fb1e4e24ca8a4900d7714bebf97b74eeb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,State:CONTAINER_EXITED,CreatedAt:1696246794078692088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b61273fda713f0cdc7bb68bf6df3df0,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a13b0,io.kubernetes.container.restartCount: 1,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2485db12-b334-4d06-846c-a914eae8c05c name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.109927268Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c78633f6-eea0-49ec-ac1f-17e081372f0f name=/runtime.v1.RuntimeService/Version
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.110077187Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c78633f6-eea0-49ec-ac1f-17e081372f0f name=/runtime.v1.RuntimeService/Version
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.111151149Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d3a317eb-2a3f-4e09-8738-a5f45e481964 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.111657757Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696246833111642655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=d3a317eb-2a3f-4e09-8738-a5f45e481964 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.112236332Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ad1b437d-1021-4695-892a-a3e9088315d7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.112310841Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ad1b437d-1021-4695-892a-a3e9088315d7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.112606705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e056231c4dab5ce96e0f2ba6866351ea183a365a9098c932dead4cefb57dee3f,PodSandboxId:a8e1583dec1a8344c6727980581c8d511fa2c8252086307ac699ff5d38bd50ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696246817803694710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4wp2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83150d98-2463-4dd5-ab60-18ea97aa0fbf,},Annotations:map[string]string{io.kubernetes.container.hash: ea935ff7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a36b6719cdfce33e6b158f5be54123a96751a9a8a91cae0481038b304f0da6,PodSandboxId:e64f3caa15a5147a018f00af77784bc64b27a4804bbfdc57cdce6d6fda2ee319,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696246817765727345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9rtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 82952868-5c0c-4b75-a974-3d22d51657f1,},Annotations:map[string]string{io.kubernetes.container.hash: efb652b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e1e4107ae31b34209ea1017506c5be20040ab231df5cac960bcc268e4b6997,PodSandboxId:a1596a37cbb748e16336b494514b1e691999f6caf039d19eb14e424937086107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696246812287224653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d94e6e
9fb11e07718e5e3bf90e75114,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad66ee2a5f6dd2972fc383c1232bcdb3d8228769c6d29812fff33d00982ab32,PodSandboxId:a2553e3185728fd9816d1afb6ae1c4bd0d5ee985202b15be0e232e99217e626c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696246812319497756,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b61273fda713f0cdc7bb68
bf6df3df0,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a13b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f72cb018c252b02d527b0f1a41f42c0266c273bb327fe53aa78893401e8825,PodSandboxId:ac0b5c67a83a62175e69cc8e5fab917306f959baa2609f8088eed04b382d6fef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696246812227595175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7
6a95a27748f62d922d71c984e5e27,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7f3576c72351bd7b5608b92a734d6256b987a755e9fc85288f633cd931f24d,PodSandboxId:79a4c71f4945d7de2d071b0f4f4681283668cba87797d3c2711a20782330182e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696246812260502456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cfe5fdbd752391d091f67bf5a0ee880,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 71e046b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5cafeee47dfcba3aeac4aaa3d2f1f8f613c7a2ca593b2d4bc5b4c508c79f52b,PodSandboxId:e64f3caa15a5147a018f00af77784bc64b27a4804bbfdc57cdce6d6fda2ee319,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_EXITED,CreatedAt:1696246800315900205,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9rtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82952868-5c0c-4b75-a974-3d22d51657f1,},Annotations:map[string]string{io.kubernetes.container.hash:
efb652b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ca4784b618059d8b7840bbd5e90d613ac48d8bfb2d4672a3c8bb38cd3a20a29,PodSandboxId:15b68d076f5e460d4666c4defab88fb26d2323006c9f302e5a79779b2f1e2ec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696246795568890726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4wp2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83150d98-2463-4dd5-ab60-18ea97aa0fbf,},Annotations:map[string]string{io.kubernetes.container.hash: ea935ff7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97af4b5192afce52a285513fe73088be2dbe2cd1069f9622ed24ec79a47dd30,PodSandboxId:5d23fa1e437fa79063ea5d81e3fac650cc529132a0f7152c572b14c839c98b72,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1696246795294121084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cfe5fdbd752391d091f67
bf5a0ee880,},Annotations:map[string]string{io.kubernetes.container.hash: 71e046b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1947700b5aee5ffa5b0ddab104249e20baa7782c4fe1ec00982ad30d0662b3b6,PodSandboxId:9d3f771c04ca97cba54b854719dfe832a8cd5ec85b9181669b4b7ca7f1b0e5e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,State:CONTAINER_EXITED,CreatedAt:1696246794730446299,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d94e6e9fb11e07718e5e3bf90e75114,},Annotations:map[string]string{io.kubernete
s.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e78210aab09dcbbf4a1a0f6d5ea9d2d724740fbf655fb0013a85dc7b5816893,PodSandboxId:6fad838b7d3eedc368d31c1317839cd0af030d5dc0ce7086af0f19de9c631e0f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,State:CONTAINER_EXITED,CreatedAt:1696246794833339019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc76a95a27748f62d922d71c984e5e27,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dadcd321818f77fe74ea8528bbad283818e8005fa2ab6b37fce904fe6ee0a6,PodSandboxId:5d657a476eeb8566fd64b820d502971fb1e4e24ca8a4900d7714bebf97b74eeb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,State:CONTAINER_EXITED,CreatedAt:1696246794078692088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b61273fda713f0cdc7bb68bf6df3df0,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a13b0,io.kubernetes.container.restartCount: 1,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ad1b437d-1021-4695-892a-a3e9088315d7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.158057378Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a2ea612b-3abc-4700-9379-c997aa50adcd name=/runtime.v1.RuntimeService/Version
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.158116128Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a2ea612b-3abc-4700-9379-c997aa50adcd name=/runtime.v1.RuntimeService/Version
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.160071589Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=57dd49b3-1ecc-4073-abdd-d9d3287a067c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.160481296Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696246833160467470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=57dd49b3-1ecc-4073-abdd-d9d3287a067c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.161097946Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=44bc01f5-3168-43e6-842a-5b03c5503a8b name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.161167730Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=44bc01f5-3168-43e6-842a-5b03c5503a8b name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:33 pause-892275 crio[2601]: time="2023-10-02 11:40:33.161541548Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e056231c4dab5ce96e0f2ba6866351ea183a365a9098c932dead4cefb57dee3f,PodSandboxId:a8e1583dec1a8344c6727980581c8d511fa2c8252086307ac699ff5d38bd50ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696246817803694710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4wp2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83150d98-2463-4dd5-ab60-18ea97aa0fbf,},Annotations:map[string]string{io.kubernetes.container.hash: ea935ff7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a36b6719cdfce33e6b158f5be54123a96751a9a8a91cae0481038b304f0da6,PodSandboxId:e64f3caa15a5147a018f00af77784bc64b27a4804bbfdc57cdce6d6fda2ee319,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696246817765727345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9rtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 82952868-5c0c-4b75-a974-3d22d51657f1,},Annotations:map[string]string{io.kubernetes.container.hash: efb652b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e1e4107ae31b34209ea1017506c5be20040ab231df5cac960bcc268e4b6997,PodSandboxId:a1596a37cbb748e16336b494514b1e691999f6caf039d19eb14e424937086107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696246812287224653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d94e6e
9fb11e07718e5e3bf90e75114,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad66ee2a5f6dd2972fc383c1232bcdb3d8228769c6d29812fff33d00982ab32,PodSandboxId:a2553e3185728fd9816d1afb6ae1c4bd0d5ee985202b15be0e232e99217e626c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696246812319497756,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b61273fda713f0cdc7bb68
bf6df3df0,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a13b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f72cb018c252b02d527b0f1a41f42c0266c273bb327fe53aa78893401e8825,PodSandboxId:ac0b5c67a83a62175e69cc8e5fab917306f959baa2609f8088eed04b382d6fef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696246812227595175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7
6a95a27748f62d922d71c984e5e27,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7f3576c72351bd7b5608b92a734d6256b987a755e9fc85288f633cd931f24d,PodSandboxId:79a4c71f4945d7de2d071b0f4f4681283668cba87797d3c2711a20782330182e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696246812260502456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cfe5fdbd752391d091f67bf5a0ee880,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 71e046b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5cafeee47dfcba3aeac4aaa3d2f1f8f613c7a2ca593b2d4bc5b4c508c79f52b,PodSandboxId:e64f3caa15a5147a018f00af77784bc64b27a4804bbfdc57cdce6d6fda2ee319,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_EXITED,CreatedAt:1696246800315900205,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9rtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82952868-5c0c-4b75-a974-3d22d51657f1,},Annotations:map[string]string{io.kubernetes.container.hash:
efb652b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ca4784b618059d8b7840bbd5e90d613ac48d8bfb2d4672a3c8bb38cd3a20a29,PodSandboxId:15b68d076f5e460d4666c4defab88fb26d2323006c9f302e5a79779b2f1e2ec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696246795568890726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4wp2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83150d98-2463-4dd5-ab60-18ea97aa0fbf,},Annotations:map[string]string{io.kubernetes.container.hash: ea935ff7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97af4b5192afce52a285513fe73088be2dbe2cd1069f9622ed24ec79a47dd30,PodSandboxId:5d23fa1e437fa79063ea5d81e3fac650cc529132a0f7152c572b14c839c98b72,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1696246795294121084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cfe5fdbd752391d091f67
bf5a0ee880,},Annotations:map[string]string{io.kubernetes.container.hash: 71e046b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1947700b5aee5ffa5b0ddab104249e20baa7782c4fe1ec00982ad30d0662b3b6,PodSandboxId:9d3f771c04ca97cba54b854719dfe832a8cd5ec85b9181669b4b7ca7f1b0e5e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,State:CONTAINER_EXITED,CreatedAt:1696246794730446299,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d94e6e9fb11e07718e5e3bf90e75114,},Annotations:map[string]string{io.kubernete
s.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e78210aab09dcbbf4a1a0f6d5ea9d2d724740fbf655fb0013a85dc7b5816893,PodSandboxId:6fad838b7d3eedc368d31c1317839cd0af030d5dc0ce7086af0f19de9c631e0f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,State:CONTAINER_EXITED,CreatedAt:1696246794833339019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc76a95a27748f62d922d71c984e5e27,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dadcd321818f77fe74ea8528bbad283818e8005fa2ab6b37fce904fe6ee0a6,PodSandboxId:5d657a476eeb8566fd64b820d502971fb1e4e24ca8a4900d7714bebf97b74eeb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,State:CONTAINER_EXITED,CreatedAt:1696246794078692088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b61273fda713f0cdc7bb68bf6df3df0,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a13b0,io.kubernetes.container.restartCount: 1,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=44bc01f5-3168-43e6-842a-5b03c5503a8b name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e056231c4dab5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 seconds ago      Running             coredns                   2                   a8e1583dec1a8       coredns-5dd5756b68-4wp2m
	97a36b6719cdf       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   15 seconds ago      Running             kube-proxy                2                   e64f3caa15a51       kube-proxy-h9rtm
	7ad66ee2a5f6d       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   20 seconds ago      Running             kube-apiserver            2                   a2553e3185728       kube-apiserver-pause-892275
	47e1e4107ae31       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   20 seconds ago      Running             kube-scheduler            2                   a1596a37cbb74       kube-scheduler-pause-892275
	8b7f3576c7235       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   20 seconds ago      Running             etcd                      2                   79a4c71f4945d       etcd-pause-892275
	e9f72cb018c25       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   21 seconds ago      Running             kube-controller-manager   2                   ac0b5c67a83a6       kube-controller-manager-pause-892275
	c5cafeee47dfc       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   32 seconds ago      Exited              kube-proxy                1                   e64f3caa15a51       kube-proxy-h9rtm
	0ca4784b61805       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   37 seconds ago      Exited              coredns                   1                   15b68d076f5e4       coredns-5dd5756b68-4wp2m
	a97af4b5192af       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   37 seconds ago      Exited              etcd                      1                   5d23fa1e437fa       etcd-pause-892275
	9e78210aab09d       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   38 seconds ago      Exited              kube-controller-manager   1                   6fad838b7d3ee       kube-controller-manager-pause-892275
	1947700b5aee5       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   38 seconds ago      Exited              kube-scheduler            1                   9d3f771c04ca9       kube-scheduler-pause-892275
	14dadcd321818       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   39 seconds ago      Exited              kube-apiserver            1                   5d657a476eeb8       kube-apiserver-pause-892275
	
	* 
	* ==> coredns [0ca4784b618059d8b7840bbd5e90d613ac48d8bfb2d4672a3c8bb38cd3a20a29] <==
	* 
	* 
	* ==> coredns [e056231c4dab5ce96e0f2ba6866351ea183a365a9098c932dead4cefb57dee3f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48079 - 52002 "HINFO IN 2874011237112364441.1389752061937950605. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012240445s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-892275
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-892275
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=pause-892275
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T11_38_50_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 11:38:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-892275
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 11:40:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 11:40:17 +0000   Mon, 02 Oct 2023 11:38:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 11:40:17 +0000   Mon, 02 Oct 2023 11:38:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 11:40:17 +0000   Mon, 02 Oct 2023 11:38:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 11:40:17 +0000   Mon, 02 Oct 2023 11:38:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.203
	  Hostname:    pause-892275
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 057e80e18df541c1876eb3ad2541b9e8
	  System UUID:                057e80e1-8df5-41c1-876e-b3ad2541b9e8
	  Boot ID:                    f90c013b-1d34-4b3c-950f-d55377e21595
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-4wp2m                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     91s
	  kube-system                 etcd-pause-892275                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         102s
	  kube-system                 kube-apiserver-pause-892275             250m (12%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-pause-892275    200m (10%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-h9rtm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-pause-892275             100m (5%)     0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 88s                  kube-proxy       
	  Normal  Starting                 15s                  kube-proxy       
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  112s (x8 over 112s)  kubelet          Node pause-892275 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s (x8 over 112s)  kubelet          Node pause-892275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s (x7 over 112s)  kubelet          Node pause-892275 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  112s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                103s                 kubelet          Node pause-892275 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  102s                 kubelet          Node pause-892275 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s                 kubelet          Node pause-892275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s                 kubelet          Node pause-892275 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           92s                  node-controller  Node pause-892275 event: Registered Node pause-892275 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node pause-892275 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node pause-892275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node pause-892275 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                   node-controller  Node pause-892275 event: Registered Node pause-892275 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct 2 11:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.077209] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.696530] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.951494] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.146245] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.172759] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.366171] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.154860] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.172728] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.135004] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.285055] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[  +9.986795] systemd-fstab-generator[920]: Ignoring "noauto" for root device
	[  +9.823650] systemd-fstab-generator[1257]: Ignoring "noauto" for root device
	[Oct 2 11:39] kauditd_printk_skb: 21 callbacks suppressed
	[  +9.758882] systemd-fstab-generator[2253]: Ignoring "noauto" for root device
	[  +0.242465] systemd-fstab-generator[2265]: Ignoring "noauto" for root device
	[  +0.762195] systemd-fstab-generator[2329]: Ignoring "noauto" for root device
	[  +0.336104] systemd-fstab-generator[2405]: Ignoring "noauto" for root device
	[  +0.515817] systemd-fstab-generator[2476]: Ignoring "noauto" for root device
	[Oct 2 11:40] systemd-fstab-generator[3217]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [8b7f3576c72351bd7b5608b92a734d6256b987a755e9fc85288f633cd931f24d] <==
	* {"level":"info","ts":"2023-10-02T11:40:13.794959Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T11:40:13.79497Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T11:40:13.795184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 switched to configuration voters=(4453574332218813984)"}
	{"level":"info","ts":"2023-10-02T11:40:13.795228Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","added-peer-id":"3dce464254b32e20","added-peer-peer-urls":["https://192.168.61.203:2380"]}
	{"level":"info","ts":"2023-10-02T11:40:13.795301Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:40:13.795324Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:40:13.798484Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-02T11:40:13.801119Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"3dce464254b32e20","initial-advertise-peer-urls":["https://192.168.61.203:2380"],"listen-peer-urls":["https://192.168.61.203:2380"],"advertise-client-urls":["https://192.168.61.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-02T11:40:13.801406Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2023-10-02T11:40:13.801443Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2023-10-02T11:40:13.801212Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-02T11:40:14.855939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-02T11:40:14.856021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-02T11:40:14.856067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 received MsgPreVoteResp from 3dce464254b32e20 at term 2"}
	{"level":"info","ts":"2023-10-02T11:40:14.856092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became candidate at term 3"}
	{"level":"info","ts":"2023-10-02T11:40:14.856114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 received MsgVoteResp from 3dce464254b32e20 at term 3"}
	{"level":"info","ts":"2023-10-02T11:40:14.856132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became leader at term 3"}
	{"level":"info","ts":"2023-10-02T11:40:14.85615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3dce464254b32e20 elected leader 3dce464254b32e20 at term 3"}
	{"level":"info","ts":"2023-10-02T11:40:14.865312Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"3dce464254b32e20","local-member-attributes":"{Name:pause-892275 ClientURLs:[https://192.168.61.203:2379]}","request-path":"/0/members/3dce464254b32e20/attributes","cluster-id":"817eda555b894faf","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T11:40:14.866915Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T11:40:14.868545Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.203:2379"}
	{"level":"info","ts":"2023-10-02T11:40:14.868758Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T11:40:14.878049Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T11:40:14.885846Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T11:40:14.885936Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [a97af4b5192afce52a285513fe73088be2dbe2cd1069f9622ed24ec79a47dd30] <==
	* 
	* 
	* ==> kernel <==
	*  11:40:33 up 2 min,  0 users,  load average: 1.76, 0.71, 0.26
	Linux pause-892275 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [14dadcd321818f77fe74ea8528bbad283818e8005fa2ab6b37fce904fe6ee0a6] <==
	* I1002 11:39:55.687460       1 options.go:220] external host was not specified, using 192.168.61.203
	I1002 11:39:55.688918       1 server.go:148] Version: v1.28.2
	I1002 11:39:55.689011       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 11:39:56.758693       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I1002 11:39:56.768536       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1002 11:39:56.768650       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1002 11:39:56.768957       1 instance.go:298] Using reconciler: lease
	W1002 11:39:56.770685       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [7ad66ee2a5f6dd2972fc383c1232bcdb3d8228769c6d29812fff33d00982ab32] <==
	* I1002 11:40:17.196053       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1002 11:40:17.197057       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1002 11:40:17.197110       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1002 11:40:17.347021       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 11:40:17.361565       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1002 11:40:17.378642       1 shared_informer.go:318] Caches are synced for configmaps
	I1002 11:40:17.378758       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1002 11:40:17.378854       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1002 11:40:17.379378       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 11:40:17.380544       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1002 11:40:17.383243       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 11:40:17.397910       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1002 11:40:17.397940       1 aggregator.go:166] initial CRD sync complete...
	I1002 11:40:17.397952       1 autoregister_controller.go:141] Starting autoregister controller
	I1002 11:40:17.397958       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 11:40:17.397963       1 cache.go:39] Caches are synced for autoregister controller
	E1002 11:40:17.432283       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 11:40:18.194335       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 11:40:18.925585       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 11:40:18.941396       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 11:40:19.005969       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1002 11:40:19.045929       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 11:40:19.054884       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 11:40:30.204659       1 controller.go:624] quota admission added evaluator for: endpoints
	I1002 11:40:30.337866       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [9e78210aab09dcbbf4a1a0f6d5ea9d2d724740fbf655fb0013a85dc7b5816893] <==
	* 
	* 
	* ==> kube-controller-manager [e9f72cb018c252b02d527b0f1a41f42c0266c273bb327fe53aa78893401e8825] <==
	* I1002 11:40:30.290053       1 shared_informer.go:318] Caches are synced for GC
	I1002 11:40:30.299594       1 shared_informer.go:318] Caches are synced for node
	I1002 11:40:30.299834       1 range_allocator.go:174] "Sending events to api server"
	I1002 11:40:30.299875       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1002 11:40:30.299883       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1002 11:40:30.299892       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1002 11:40:30.302302       1 shared_informer.go:318] Caches are synced for taint
	I1002 11:40:30.302445       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1002 11:40:30.302572       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1002 11:40:30.302602       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-892275"
	I1002 11:40:30.302706       1 taint_manager.go:211] "Sending events to api server"
	I1002 11:40:30.302709       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1002 11:40:30.303421       1 event.go:307] "Event occurred" object="pause-892275" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-892275 event: Registered Node pause-892275 in Controller"
	I1002 11:40:30.305968       1 shared_informer.go:318] Caches are synced for persistent volume
	I1002 11:40:30.312166       1 shared_informer.go:318] Caches are synced for TTL
	I1002 11:40:30.319578       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1002 11:40:30.324753       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 11:40:30.334878       1 shared_informer.go:318] Caches are synced for daemon sets
	I1002 11:40:30.337341       1 shared_informer.go:318] Caches are synced for attach detach
	I1002 11:40:30.373722       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1002 11:40:30.382234       1 shared_informer.go:318] Caches are synced for disruption
	I1002 11:40:30.395852       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 11:40:30.755422       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 11:40:30.801137       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 11:40:30.801190       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [97a36b6719cdfce33e6b158f5be54123a96751a9a8a91cae0481038b304f0da6] <==
	* I1002 11:40:18.061708       1 server_others.go:69] "Using iptables proxy"
	I1002 11:40:18.079402       1 node.go:141] Successfully retrieved node IP: 192.168.61.203
	I1002 11:40:18.120029       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1002 11:40:18.120082       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 11:40:18.123074       1 server_others.go:152] "Using iptables Proxier"
	I1002 11:40:18.123520       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 11:40:18.124293       1 server.go:846] "Version info" version="v1.28.2"
	I1002 11:40:18.124342       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 11:40:18.127113       1 config.go:188] "Starting service config controller"
	I1002 11:40:18.127158       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 11:40:18.127197       1 config.go:97] "Starting endpoint slice config controller"
	I1002 11:40:18.127740       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 11:40:18.127492       1 config.go:315] "Starting node config controller"
	I1002 11:40:18.128292       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 11:40:18.228880       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 11:40:18.228967       1 shared_informer.go:318] Caches are synced for service config
	I1002 11:40:18.229258       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [c5cafeee47dfcba3aeac4aaa3d2f1f8f613c7a2ca593b2d4bc5b4c508c79f52b] <==
	* I1002 11:40:00.456845       1 server_others.go:69] "Using iptables proxy"
	E1002 11:40:00.460385       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-892275": dial tcp 192.168.61.203:8443: connect: connection refused
	E1002 11:40:01.603311       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-892275": dial tcp 192.168.61.203:8443: connect: connection refused
	E1002 11:40:03.902454       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-892275": dial tcp 192.168.61.203:8443: connect: connection refused
	E1002 11:40:08.207910       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-892275": dial tcp 192.168.61.203:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [1947700b5aee5ffa5b0ddab104249e20baa7782c4fe1ec00982ad30d0662b3b6] <==
	* 
	* 
	* ==> kube-scheduler [47e1e4107ae31b34209ea1017506c5be20040ab231df5cac960bcc268e4b6997] <==
	* I1002 11:40:15.567514       1 serving.go:348] Generated self-signed cert in-memory
	W1002 11:40:17.320639       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 11:40:17.320738       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 11:40:17.320856       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 11:40:17.320898       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 11:40:17.357479       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1002 11:40:17.357535       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 11:40:17.359517       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 11:40:17.359711       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 11:40:17.359740       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 11:40:17.359753       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 11:40:17.460259       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 11:38:12 UTC, ends at Mon 2023-10-02 11:40:34 UTC. --
	Oct 02 11:40:12 pause-892275 kubelet[3223]: I1002 11:40:12.193924    3223 scope.go:117] "RemoveContainer" containerID="1947700b5aee5ffa5b0ddab104249e20baa7782c4fe1ec00982ad30d0662b3b6"
	Oct 02 11:40:12 pause-892275 kubelet[3223]: W1002 11:40:12.456165    3223 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.203:8443: connect: connection refused
	Oct 02 11:40:12 pause-892275 kubelet[3223]: E1002 11:40:12.456225    3223 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.203:8443: connect: connection refused
	Oct 02 11:40:12 pause-892275 kubelet[3223]: W1002 11:40:12.565298    3223 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-892275&limit=500&resourceVersion=0": dial tcp 192.168.61.203:8443: connect: connection refused
	Oct 02 11:40:12 pause-892275 kubelet[3223]: E1002 11:40:12.565380    3223 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-892275&limit=500&resourceVersion=0": dial tcp 192.168.61.203:8443: connect: connection refused
	Oct 02 11:40:12 pause-892275 kubelet[3223]: W1002 11:40:12.586278    3223 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.203:8443: connect: connection refused
	Oct 02 11:40:12 pause-892275 kubelet[3223]: E1002 11:40:12.586355    3223 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.203:8443: connect: connection refused
	Oct 02 11:40:12 pause-892275 kubelet[3223]: E1002 11:40:12.845425    3223 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-892275?timeout=10s\": dial tcp 192.168.61.203:8443: connect: connection refused" interval="1.6s"
	Oct 02 11:40:12 pause-892275 kubelet[3223]: W1002 11:40:12.922595    3223 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.203:8443: connect: connection refused
	Oct 02 11:40:12 pause-892275 kubelet[3223]: E1002 11:40:12.922677    3223 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.203:8443: connect: connection refused
	Oct 02 11:40:12 pause-892275 kubelet[3223]: I1002 11:40:12.962167    3223 kubelet_node_status.go:70] "Attempting to register node" node="pause-892275"
	Oct 02 11:40:12 pause-892275 kubelet[3223]: E1002 11:40:12.962691    3223 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.203:8443: connect: connection refused" node="pause-892275"
	Oct 02 11:40:14 pause-892275 kubelet[3223]: I1002 11:40:14.564612    3223 kubelet_node_status.go:70] "Attempting to register node" node="pause-892275"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.417414    3223 apiserver.go:52] "Watching apiserver"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.425764    3223 topology_manager.go:215] "Topology Admit Handler" podUID="82952868-5c0c-4b75-a974-3d22d51657f1" podNamespace="kube-system" podName="kube-proxy-h9rtm"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.426047    3223 topology_manager.go:215] "Topology Admit Handler" podUID="83150d98-2463-4dd5-ab60-18ea97aa0fbf" podNamespace="kube-system" podName="coredns-5dd5756b68-4wp2m"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.427027    3223 kubelet_node_status.go:108] "Node was previously registered" node="pause-892275"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.427170    3223 kubelet_node_status.go:73] "Successfully registered node" node="pause-892275"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.440977    3223 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.445126    3223 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.446630    3223 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.529670    3223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82952868-5c0c-4b75-a974-3d22d51657f1-xtables-lock\") pod \"kube-proxy-h9rtm\" (UID: \"82952868-5c0c-4b75-a974-3d22d51657f1\") " pod="kube-system/kube-proxy-h9rtm"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.530026    3223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82952868-5c0c-4b75-a974-3d22d51657f1-lib-modules\") pod \"kube-proxy-h9rtm\" (UID: \"82952868-5c0c-4b75-a974-3d22d51657f1\") " pod="kube-system/kube-proxy-h9rtm"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.727613    3223 scope.go:117] "RemoveContainer" containerID="0ca4784b618059d8b7840bbd5e90d613ac48d8bfb2d4672a3c8bb38cd3a20a29"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.728194    3223 scope.go:117] "RemoveContainer" containerID="c5cafeee47dfcba3aeac4aaa3d2f1f8f613c7a2ca593b2d4bc5b4c508c79f52b"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-892275 -n pause-892275
helpers_test.go:261: (dbg) Run:  kubectl --context pause-892275 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-892275 -n pause-892275
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-892275 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-892275 logs -n 25: (1.428016592s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p NoKubernetes-083017                | NoKubernetes-083017       | jenkins | v1.31.2 | 02 Oct 23 11:35 UTC | 02 Oct 23 11:35 UTC |
	| start   | -p NoKubernetes-083017                | NoKubernetes-083017       | jenkins | v1.31.2 | 02 Oct 23 11:35 UTC | 02 Oct 23 11:36 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-819186 ssh cat     | force-systemd-flag-819186 | jenkins | v1.31.2 | 02 Oct 23 11:35 UTC | 02 Oct 23 11:35 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-819186          | force-systemd-flag-819186 | jenkins | v1.31.2 | 02 Oct 23 11:35 UTC | 02 Oct 23 11:35 UTC |
	| ssh     | cert-options-045561 ssh               | cert-options-045561       | jenkins | v1.31.2 | 02 Oct 23 11:35 UTC | 02 Oct 23 11:35 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-045561 -- sudo        | cert-options-045561       | jenkins | v1.31.2 | 02 Oct 23 11:35 UTC | 02 Oct 23 11:35 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-045561                | cert-options-045561       | jenkins | v1.31.2 | 02 Oct 23 11:35 UTC | 02 Oct 23 11:35 UTC |
	| start   | -p kubernetes-upgrade-613769          | kubernetes-upgrade-613769 | jenkins | v1.31.2 | 02 Oct 23 11:35 UTC | 02 Oct 23 11:37 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-083017 sudo           | NoKubernetes-083017       | jenkins | v1.31.2 | 02 Oct 23 11:36 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-083017                | NoKubernetes-083017       | jenkins | v1.31.2 | 02 Oct 23 11:36 UTC | 02 Oct 23 11:36 UTC |
	| start   | -p NoKubernetes-083017                | NoKubernetes-083017       | jenkins | v1.31.2 | 02 Oct 23 11:36 UTC |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-083017                | NoKubernetes-083017       | jenkins | v1.31.2 | 02 Oct 23 11:37 UTC | 02 Oct 23 11:37 UTC |
	| stop    | -p kubernetes-upgrade-613769          | kubernetes-upgrade-613769 | jenkins | v1.31.2 | 02 Oct 23 11:37 UTC | 02 Oct 23 11:37 UTC |
	| start   | -p running-upgrade-703246             | running-upgrade-703246    | jenkins | v1.31.2 | 02 Oct 23 11:37 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-613769          | kubernetes-upgrade-613769 | jenkins | v1.31.2 | 02 Oct 23 11:37 UTC | 02 Oct 23 11:38 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-703246             | running-upgrade-703246    | jenkins | v1.31.2 | 02 Oct 23 11:37 UTC | 02 Oct 23 11:37 UTC |
	| start   | -p pause-892275 --memory=2048         | pause-892275              | jenkins | v1.31.2 | 02 Oct 23 11:37 UTC | 02 Oct 23 11:39 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-613769          | kubernetes-upgrade-613769 | jenkins | v1.31.2 | 02 Oct 23 11:38 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-613769          | kubernetes-upgrade-613769 | jenkins | v1.31.2 | 02 Oct 23 11:38 UTC | 02 Oct 23 11:39 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-613769          | kubernetes-upgrade-613769 | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC | 02 Oct 23 11:39 UTC |
	| start   | -p auto-124285 --memory=3072          | auto-124285               | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-394393             | cert-expiration-394393    | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC | 02 Oct 23 11:40 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-892275                       | pause-892275              | jenkins | v1.31.2 | 02 Oct 23 11:39 UTC | 02 Oct 23 11:40 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-394393             | cert-expiration-394393    | jenkins | v1.31.2 | 02 Oct 23 11:40 UTC | 02 Oct 23 11:40 UTC |
	| start   | -p kindnet-124285                     | kindnet-124285            | jenkins | v1.31.2 | 02 Oct 23 11:40 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 11:40:20
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 11:40:20.462755  370303 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:40:20.463061  370303 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:40:20.463074  370303 out.go:309] Setting ErrFile to fd 2...
	I1002 11:40:20.463082  370303 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:40:20.463371  370303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 11:40:20.464138  370303 out.go:303] Setting JSON to false
	I1002 11:40:20.465208  370303 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8567,"bootTime":1696238254,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 11:40:20.465266  370303 start.go:138] virtualization: kvm guest
	I1002 11:40:20.468748  370303 out.go:177] * [kindnet-124285] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 11:40:20.470296  370303 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 11:40:20.471925  370303 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:40:20.470310  370303 notify.go:220] Checking for updates...
	I1002 11:40:20.475142  370303 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:40:20.476587  370303 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:40:20.477982  370303 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 11:40:20.479531  370303 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 11:40:20.481479  370303 config.go:182] Loaded profile config "auto-124285": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:40:20.481619  370303 config.go:182] Loaded profile config "pause-892275": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:40:20.481675  370303 config.go:182] Loaded profile config "stopped-upgrade-204505": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1002 11:40:20.481787  370303 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 11:40:20.523663  370303 out.go:177] * Using the kvm2 driver based on user configuration
	I1002 11:40:20.525212  370303 start.go:298] selected driver: kvm2
	I1002 11:40:20.525231  370303 start.go:902] validating driver "kvm2" against <nil>
	I1002 11:40:20.525247  370303 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 11:40:20.525925  370303 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:40:20.526010  370303 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 11:40:20.541178  370303 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 11:40:20.541224  370303 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 11:40:20.541424  370303 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 11:40:20.541462  370303 cni.go:84] Creating CNI manager for "kindnet"
	I1002 11:40:20.541470  370303 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 11:40:20.541503  370303 start_flags.go:321] config:
	{Name:kindnet-124285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-124285 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:40:20.541620  370303 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:40:20.543527  370303 out.go:177] * Starting control plane node kindnet-124285 in cluster kindnet-124285
	I1002 11:40:20.544669  370303 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:40:20.544712  370303 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1002 11:40:20.544724  370303 cache.go:57] Caching tarball of preloaded images
	I1002 11:40:20.544826  370303 preload.go:174] Found /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 11:40:20.544841  370303 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 11:40:20.544934  370303 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/config.json ...
	I1002 11:40:20.544953  370303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/config.json: {Name:mk10d3c809bfedcd616c1689b1ae1599ed7c3186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:40:20.545123  370303 start.go:365] acquiring machines lock for kindnet-124285: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:40:20.545160  370303 start.go:369] acquired machines lock for "kindnet-124285" in 19.349µs
	I1002 11:40:20.545184  370303 start.go:93] Provisioning new machine with config: &{Name:kindnet-124285 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-124285 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:40:20.545283  370303 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 11:40:18.701183  370050 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:40:18.713504  370050 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:40:18.734064  370050 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:40:18.748420  370050 system_pods.go:59] 6 kube-system pods found
	I1002 11:40:18.748475  370050 system_pods.go:61] "coredns-5dd5756b68-4wp2m" [83150d98-2463-4dd5-ab60-18ea97aa0fbf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:40:18.748492  370050 system_pods.go:61] "etcd-pause-892275" [b59f99ce-a6e5-44b7-88d5-65507ac1abd6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:40:18.748503  370050 system_pods.go:61] "kube-apiserver-pause-892275" [91ef49df-2f4d-4799-b8b0-409c6bb79a94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:40:18.748518  370050 system_pods.go:61] "kube-controller-manager-pause-892275" [607268c2-f567-4792-8063-fdf09bf0ee8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 11:40:18.748533  370050 system_pods.go:61] "kube-proxy-h9rtm" [82952868-5c0c-4b75-a974-3d22d51657f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 11:40:18.748543  370050 system_pods.go:61] "kube-scheduler-pause-892275" [202c923b-a98c-4fc8-aaf2-527dbda63e56] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 11:40:18.748554  370050 system_pods.go:74] duration metric: took 14.463666ms to wait for pod list to return data ...
	I1002 11:40:18.748568  370050 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:40:18.753459  370050 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:40:18.753498  370050 node_conditions.go:123] node cpu capacity is 2
	I1002 11:40:18.753514  370050 node_conditions.go:105] duration metric: took 4.938886ms to run NodePressure ...
	I1002 11:40:18.753541  370050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:40:19.065894  370050 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:40:19.070941  370050 kubeadm.go:787] kubelet initialised
	I1002 11:40:19.070972  370050 kubeadm.go:788] duration metric: took 5.049847ms waiting for restarted kubelet to initialise ...
	I1002 11:40:19.070984  370050 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:40:19.076098  370050 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-4wp2m" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:19.081753  370050 pod_ready.go:92] pod "coredns-5dd5756b68-4wp2m" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:19.081781  370050 pod_ready.go:81] duration metric: took 5.656185ms waiting for pod "coredns-5dd5756b68-4wp2m" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:19.081793  370050 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:16.807453  369698 pod_ready.go:102] pod "coredns-5dd5756b68-2phr6" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:19.301555  369698 pod_ready.go:102] pod "coredns-5dd5756b68-2phr6" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:21.305492  369698 pod_ready.go:102] pod "coredns-5dd5756b68-2phr6" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:20.546875  370303 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 11:40:20.547068  370303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:40:20.547121  370303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:40:20.561158  370303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33895
	I1002 11:40:20.561559  370303 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:40:20.562082  370303 main.go:141] libmachine: Using API Version  1
	I1002 11:40:20.562106  370303 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:40:20.562510  370303 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:40:20.562731  370303 main.go:141] libmachine: (kindnet-124285) Calling .GetMachineName
	I1002 11:40:20.562899  370303 main.go:141] libmachine: (kindnet-124285) Calling .DriverName
	I1002 11:40:20.563035  370303 start.go:159] libmachine.API.Create for "kindnet-124285" (driver="kvm2")
	I1002 11:40:20.563100  370303 client.go:168] LocalClient.Create starting
	I1002 11:40:20.563139  370303 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem
	I1002 11:40:20.563182  370303 main.go:141] libmachine: Decoding PEM data...
	I1002 11:40:20.563207  370303 main.go:141] libmachine: Parsing certificate...
	I1002 11:40:20.563303  370303 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem
	I1002 11:40:20.563331  370303 main.go:141] libmachine: Decoding PEM data...
	I1002 11:40:20.563348  370303 main.go:141] libmachine: Parsing certificate...
	I1002 11:40:20.563367  370303 main.go:141] libmachine: Running pre-create checks...
	I1002 11:40:20.563377  370303 main.go:141] libmachine: (kindnet-124285) Calling .PreCreateCheck
	I1002 11:40:20.563726  370303 main.go:141] libmachine: (kindnet-124285) Calling .GetConfigRaw
	I1002 11:40:20.564149  370303 main.go:141] libmachine: Creating machine...
	I1002 11:40:20.564167  370303 main.go:141] libmachine: (kindnet-124285) Calling .Create
	I1002 11:40:20.564310  370303 main.go:141] libmachine: (kindnet-124285) Creating KVM machine...
	I1002 11:40:20.565515  370303 main.go:141] libmachine: (kindnet-124285) DBG | found existing default KVM network
	I1002 11:40:20.566901  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:20.566749  370326 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:7e:bf:72} reservation:<nil>}
	I1002 11:40:20.567818  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:20.567724  370326 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:22:03:3f} reservation:<nil>}
	I1002 11:40:20.568777  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:20.568703  370326 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:5e:09:d2} reservation:<nil>}
	I1002 11:40:20.570012  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:20.569942  370326 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a9a80}
	I1002 11:40:20.575197  370303 main.go:141] libmachine: (kindnet-124285) DBG | trying to create private KVM network mk-kindnet-124285 192.168.72.0/24...
	I1002 11:40:20.651762  370303 main.go:141] libmachine: (kindnet-124285) DBG | private KVM network mk-kindnet-124285 192.168.72.0/24 created
	I1002 11:40:20.651899  370303 main.go:141] libmachine: (kindnet-124285) Setting up store path in /home/jenkins/minikube-integration/17340-332611/.minikube/machines/kindnet-124285 ...
	I1002 11:40:20.651946  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:20.651877  370326 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:40:20.651962  370303 main.go:141] libmachine: (kindnet-124285) Building disk image from file:///home/jenkins/minikube-integration/17340-332611/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1002 11:40:20.652031  370303 main.go:141] libmachine: (kindnet-124285) Downloading /home/jenkins/minikube-integration/17340-332611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17340-332611/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1002 11:40:20.917077  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:20.916959  370326 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/kindnet-124285/id_rsa...
	I1002 11:40:21.181405  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:21.181248  370326 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/kindnet-124285/kindnet-124285.rawdisk...
	I1002 11:40:21.181436  370303 main.go:141] libmachine: (kindnet-124285) DBG | Writing magic tar header
	I1002 11:40:21.181455  370303 main.go:141] libmachine: (kindnet-124285) DBG | Writing SSH key tar header
	I1002 11:40:21.181472  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:21.181353  370326 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17340-332611/.minikube/machines/kindnet-124285 ...
	I1002 11:40:21.181488  370303 main.go:141] libmachine: (kindnet-124285) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/kindnet-124285
	I1002 11:40:21.181565  370303 main.go:141] libmachine: (kindnet-124285) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube/machines
	I1002 11:40:21.181612  370303 main.go:141] libmachine: (kindnet-124285) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube/machines/kindnet-124285 (perms=drwx------)
	I1002 11:40:21.181625  370303 main.go:141] libmachine: (kindnet-124285) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:40:21.181676  370303 main.go:141] libmachine: (kindnet-124285) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611
	I1002 11:40:21.181700  370303 main.go:141] libmachine: (kindnet-124285) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube/machines (perms=drwxr-xr-x)
	I1002 11:40:21.181710  370303 main.go:141] libmachine: (kindnet-124285) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1002 11:40:21.181727  370303 main.go:141] libmachine: (kindnet-124285) DBG | Checking permissions on dir: /home/jenkins
	I1002 11:40:21.181743  370303 main.go:141] libmachine: (kindnet-124285) DBG | Checking permissions on dir: /home
	I1002 11:40:21.181757  370303 main.go:141] libmachine: (kindnet-124285) DBG | Skipping /home - not owner
	I1002 11:40:21.181775  370303 main.go:141] libmachine: (kindnet-124285) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube (perms=drwxr-xr-x)
	I1002 11:40:21.181793  370303 main.go:141] libmachine: (kindnet-124285) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611 (perms=drwxrwxr-x)
	I1002 11:40:21.181836  370303 main.go:141] libmachine: (kindnet-124285) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 11:40:21.181867  370303 main.go:141] libmachine: (kindnet-124285) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 11:40:21.181884  370303 main.go:141] libmachine: (kindnet-124285) Creating domain...
	I1002 11:40:21.182976  370303 main.go:141] libmachine: (kindnet-124285) define libvirt domain using xml: 
	I1002 11:40:21.183006  370303 main.go:141] libmachine: (kindnet-124285) <domain type='kvm'>
	I1002 11:40:21.183019  370303 main.go:141] libmachine: (kindnet-124285)   <name>kindnet-124285</name>
	I1002 11:40:21.183029  370303 main.go:141] libmachine: (kindnet-124285)   <memory unit='MiB'>3072</memory>
	I1002 11:40:21.183043  370303 main.go:141] libmachine: (kindnet-124285)   <vcpu>2</vcpu>
	I1002 11:40:21.183055  370303 main.go:141] libmachine: (kindnet-124285)   <features>
	I1002 11:40:21.183067  370303 main.go:141] libmachine: (kindnet-124285)     <acpi/>
	I1002 11:40:21.183078  370303 main.go:141] libmachine: (kindnet-124285)     <apic/>
	I1002 11:40:21.183091  370303 main.go:141] libmachine: (kindnet-124285)     <pae/>
	I1002 11:40:21.183099  370303 main.go:141] libmachine: (kindnet-124285)     
	I1002 11:40:21.183138  370303 main.go:141] libmachine: (kindnet-124285)   </features>
	I1002 11:40:21.183177  370303 main.go:141] libmachine: (kindnet-124285)   <cpu mode='host-passthrough'>
	I1002 11:40:21.183203  370303 main.go:141] libmachine: (kindnet-124285)   
	I1002 11:40:21.183218  370303 main.go:141] libmachine: (kindnet-124285)   </cpu>
	I1002 11:40:21.183229  370303 main.go:141] libmachine: (kindnet-124285)   <os>
	I1002 11:40:21.183243  370303 main.go:141] libmachine: (kindnet-124285)     <type>hvm</type>
	I1002 11:40:21.183254  370303 main.go:141] libmachine: (kindnet-124285)     <boot dev='cdrom'/>
	I1002 11:40:21.183267  370303 main.go:141] libmachine: (kindnet-124285)     <boot dev='hd'/>
	I1002 11:40:21.183284  370303 main.go:141] libmachine: (kindnet-124285)     <bootmenu enable='no'/>
	I1002 11:40:21.183293  370303 main.go:141] libmachine: (kindnet-124285)   </os>
	I1002 11:40:21.183301  370303 main.go:141] libmachine: (kindnet-124285)   <devices>
	I1002 11:40:21.183309  370303 main.go:141] libmachine: (kindnet-124285)     <disk type='file' device='cdrom'>
	I1002 11:40:21.183320  370303 main.go:141] libmachine: (kindnet-124285)       <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/kindnet-124285/boot2docker.iso'/>
	I1002 11:40:21.183329  370303 main.go:141] libmachine: (kindnet-124285)       <target dev='hdc' bus='scsi'/>
	I1002 11:40:21.183335  370303 main.go:141] libmachine: (kindnet-124285)       <readonly/>
	I1002 11:40:21.183343  370303 main.go:141] libmachine: (kindnet-124285)     </disk>
	I1002 11:40:21.183351  370303 main.go:141] libmachine: (kindnet-124285)     <disk type='file' device='disk'>
	I1002 11:40:21.183397  370303 main.go:141] libmachine: (kindnet-124285)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 11:40:21.183493  370303 main.go:141] libmachine: (kindnet-124285)       <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/kindnet-124285/kindnet-124285.rawdisk'/>
	I1002 11:40:21.183537  370303 main.go:141] libmachine: (kindnet-124285)       <target dev='hda' bus='virtio'/>
	I1002 11:40:21.183558  370303 main.go:141] libmachine: (kindnet-124285)     </disk>
	I1002 11:40:21.183573  370303 main.go:141] libmachine: (kindnet-124285)     <interface type='network'>
	I1002 11:40:21.183581  370303 main.go:141] libmachine: (kindnet-124285)       <source network='mk-kindnet-124285'/>
	I1002 11:40:21.183594  370303 main.go:141] libmachine: (kindnet-124285)       <model type='virtio'/>
	I1002 11:40:21.183604  370303 main.go:141] libmachine: (kindnet-124285)     </interface>
	I1002 11:40:21.183611  370303 main.go:141] libmachine: (kindnet-124285)     <interface type='network'>
	I1002 11:40:21.183619  370303 main.go:141] libmachine: (kindnet-124285)       <source network='default'/>
	I1002 11:40:21.183651  370303 main.go:141] libmachine: (kindnet-124285)       <model type='virtio'/>
	I1002 11:40:21.183676  370303 main.go:141] libmachine: (kindnet-124285)     </interface>
	I1002 11:40:21.183692  370303 main.go:141] libmachine: (kindnet-124285)     <serial type='pty'>
	I1002 11:40:21.183703  370303 main.go:141] libmachine: (kindnet-124285)       <target port='0'/>
	I1002 11:40:21.183717  370303 main.go:141] libmachine: (kindnet-124285)     </serial>
	I1002 11:40:21.183730  370303 main.go:141] libmachine: (kindnet-124285)     <console type='pty'>
	I1002 11:40:21.183743  370303 main.go:141] libmachine: (kindnet-124285)       <target type='serial' port='0'/>
	I1002 11:40:21.183761  370303 main.go:141] libmachine: (kindnet-124285)     </console>
	I1002 11:40:21.183782  370303 main.go:141] libmachine: (kindnet-124285)     <rng model='virtio'>
	I1002 11:40:21.183808  370303 main.go:141] libmachine: (kindnet-124285)       <backend model='random'>/dev/random</backend>
	I1002 11:40:21.183823  370303 main.go:141] libmachine: (kindnet-124285)     </rng>
	I1002 11:40:21.183834  370303 main.go:141] libmachine: (kindnet-124285)     
	I1002 11:40:21.183854  370303 main.go:141] libmachine: (kindnet-124285)     
	I1002 11:40:21.183867  370303 main.go:141] libmachine: (kindnet-124285)   </devices>
	I1002 11:40:21.183885  370303 main.go:141] libmachine: (kindnet-124285) </domain>
	I1002 11:40:21.183896  370303 main.go:141] libmachine: (kindnet-124285) 
	I1002 11:40:21.188027  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:e8:b0:00 in network default
	I1002 11:40:21.188682  370303 main.go:141] libmachine: (kindnet-124285) Ensuring networks are active...
	I1002 11:40:21.188707  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:21.189496  370303 main.go:141] libmachine: (kindnet-124285) Ensuring network default is active
	I1002 11:40:21.189995  370303 main.go:141] libmachine: (kindnet-124285) Ensuring network mk-kindnet-124285 is active
	I1002 11:40:21.190621  370303 main.go:141] libmachine: (kindnet-124285) Getting domain xml...
	I1002 11:40:21.191362  370303 main.go:141] libmachine: (kindnet-124285) Creating domain...
	I1002 11:40:22.516962  370303 main.go:141] libmachine: (kindnet-124285) Waiting to get IP...
	I1002 11:40:22.517712  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:22.518336  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:22.518386  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:22.518302  370326 retry.go:31] will retry after 189.410036ms: waiting for machine to come up
	I1002 11:40:22.709706  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:22.710251  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:22.710277  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:22.710192  370326 retry.go:31] will retry after 309.034782ms: waiting for machine to come up
	I1002 11:40:23.020627  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:23.021233  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:23.021266  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:23.021188  370326 retry.go:31] will retry after 372.780693ms: waiting for machine to come up
	I1002 11:40:23.395824  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:23.396366  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:23.396395  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:23.396301  370326 retry.go:31] will retry after 375.415276ms: waiting for machine to come up
	I1002 11:40:23.772927  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:23.773434  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:23.773469  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:23.773376  370326 retry.go:31] will retry after 714.473858ms: waiting for machine to come up
	I1002 11:40:24.489320  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:24.489793  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:24.489817  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:24.489737  370326 retry.go:31] will retry after 913.129752ms: waiting for machine to come up
	I1002 11:40:25.404760  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:25.405317  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:25.405356  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:25.405257  370326 retry.go:31] will retry after 846.144185ms: waiting for machine to come up
	I1002 11:40:21.101537  370050 pod_ready.go:102] pod "etcd-pause-892275" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:23.106766  370050 pod_ready.go:102] pod "etcd-pause-892275" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:25.604374  370050 pod_ready.go:102] pod "etcd-pause-892275" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:23.801522  369698 pod_ready.go:102] pod "coredns-5dd5756b68-2phr6" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:25.814752  369698 pod_ready.go:97] pod "coredns-5dd5756b68-2phr6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 11:40:11 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 11:40:11 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 11:40:11 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 11:40:11 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.83.76 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-10-02 11:40:11 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-10-02 11:40:15 +0000 UTC,FinishedAt:2023-10-02 11:40:25 +0000 UTC,ContainerID:cri-o://f0e8734f1d44c9de69cec2f9e51ad07d190b57a28ea9d061449818a731bfee5c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://f0e8734f1d44c9de69cec2f9e51ad07d190b57a28ea9d061449818a731bfee5c Started:0xc00159cac0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1002 11:40:25.814795  369698 pod_ready.go:81] duration metric: took 13.093668304s waiting for pod "coredns-5dd5756b68-2phr6" in "kube-system" namespace to be "Ready" ...
	E1002 11:40:25.814809  369698 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-2phr6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 11:40:11 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 11:40:11 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 11:40:11 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 11:40:11 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.83.76 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-10-02 11:40:11 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-10-02 11:40:15 +0000 UTC,FinishedAt:2023-10-02 11:40:25 +0000 UTC,ContainerID:cri-o://f0e8734f1d44c9de69cec2f9e51ad07d190b57a28ea9d061449818a731bfee5c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://f0e8734f1d44c9de69cec2f9e51ad07d190b57a28ea9d061449818a731bfee5c Started:0xc00159cac0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1002 11:40:25.814832  369698 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-7wdkz" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.103026  370050 pod_ready.go:102] pod "etcd-pause-892275" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:28.608956  370050 pod_ready.go:92] pod "etcd-pause-892275" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:28.608985  370050 pod_ready.go:81] duration metric: took 9.527181751s waiting for pod "etcd-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.608998  370050 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.618824  370050 pod_ready.go:92] pod "kube-apiserver-pause-892275" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:28.618850  370050 pod_ready.go:81] duration metric: took 9.844244ms waiting for pod "kube-apiserver-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.618864  370050 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.626581  370050 pod_ready.go:92] pod "kube-controller-manager-pause-892275" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:28.626613  370050 pod_ready.go:81] duration metric: took 7.729707ms waiting for pod "kube-controller-manager-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.626633  370050 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h9rtm" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.635132  370050 pod_ready.go:92] pod "kube-proxy-h9rtm" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:28.635154  370050 pod_ready.go:81] duration metric: took 8.511995ms waiting for pod "kube-proxy-h9rtm" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.635167  370050 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.641399  370050 pod_ready.go:92] pod "kube-scheduler-pause-892275" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:28.641421  370050 pod_ready.go:81] duration metric: took 6.246029ms waiting for pod "kube-scheduler-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:28.641431  370050 pod_ready.go:38] duration metric: took 9.570434191s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:40:28.641454  370050 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:40:28.653974  370050 ops.go:34] apiserver oom_adj: -16
	I1002 11:40:28.653996  370050 kubeadm.go:640] restartCluster took 28.830170134s
	I1002 11:40:28.654007  370050 kubeadm.go:406] StartCluster complete in 29.023961643s
	I1002 11:40:28.654029  370050 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:40:28.654116  370050 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:40:28.655531  370050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:40:28.655824  370050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:40:28.655991  370050 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:40:28.659621  370050 out.go:177] * Enabled addons: 
	I1002 11:40:28.656145  370050 config.go:182] Loaded profile config "pause-892275": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:40:28.656992  370050 kapi.go:59] client config for pause-892275: &rest.Config{Host:"https://192.168.61.203:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/pause-892275/client.crt", KeyFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/profiles/pause-892275/client.key", CAFile:"/home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 11:40:28.661031  370050 addons.go:502] enable addons completed in 5.049101ms: enabled=[]
	I1002 11:40:28.664158  370050 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-892275" context rescaled to 1 replicas
	I1002 11:40:28.664197  370050 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:40:28.665830  370050 out.go:177] * Verifying Kubernetes components...
	I1002 11:40:26.253023  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:26.253568  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:26.253594  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:26.253483  370326 retry.go:31] will retry after 1.071722283s: waiting for machine to come up
	I1002 11:40:27.326722  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:27.327265  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:27.327297  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:27.327208  370326 retry.go:31] will retry after 1.393629531s: waiting for machine to come up
	I1002 11:40:28.722921  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:28.723481  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:28.723513  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:28.723413  370326 retry.go:31] will retry after 1.735217347s: waiting for machine to come up
	I1002 11:40:30.460295  370303 main.go:141] libmachine: (kindnet-124285) DBG | domain kindnet-124285 has defined MAC address 52:54:00:8b:8b:15 in network mk-kindnet-124285
	I1002 11:40:30.460853  370303 main.go:141] libmachine: (kindnet-124285) DBG | unable to find current IP address of domain kindnet-124285 in network mk-kindnet-124285
	I1002 11:40:30.460888  370303 main.go:141] libmachine: (kindnet-124285) DBG | I1002 11:40:30.460773  370326 retry.go:31] will retry after 2.452036692s: waiting for machine to come up
	I1002 11:40:28.667094  370050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:40:28.769413  370050 node_ready.go:35] waiting up to 6m0s for node "pause-892275" to be "Ready" ...
	I1002 11:40:28.769457  370050 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1002 11:40:28.798318  370050 node_ready.go:49] node "pause-892275" has status "Ready":"True"
	I1002 11:40:28.798346  370050 node_ready.go:38] duration metric: took 28.893426ms waiting for node "pause-892275" to be "Ready" ...
	I1002 11:40:28.798383  370050 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:40:29.000679  370050 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4wp2m" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:29.398702  370050 pod_ready.go:92] pod "coredns-5dd5756b68-4wp2m" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:29.398735  370050 pod_ready.go:81] duration metric: took 398.027503ms waiting for pod "coredns-5dd5756b68-4wp2m" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:29.398751  370050 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:29.801306  370050 pod_ready.go:92] pod "etcd-pause-892275" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:29.801341  370050 pod_ready.go:81] duration metric: took 402.580896ms waiting for pod "etcd-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:29.801354  370050 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:30.200054  370050 pod_ready.go:92] pod "kube-apiserver-pause-892275" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:30.200098  370050 pod_ready.go:81] duration metric: took 398.735302ms waiting for pod "kube-apiserver-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:30.200114  370050 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:30.601041  370050 pod_ready.go:92] pod "kube-controller-manager-pause-892275" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:30.601069  370050 pod_ready.go:81] duration metric: took 400.946087ms waiting for pod "kube-controller-manager-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:30.601081  370050 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h9rtm" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:27.851417  369698 pod_ready.go:102] pod "coredns-5dd5756b68-7wdkz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:30.351968  369698 pod_ready.go:102] pod "coredns-5dd5756b68-7wdkz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:40:30.998854  370050 pod_ready.go:92] pod "kube-proxy-h9rtm" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:30.998881  370050 pod_ready.go:81] duration metric: took 397.791788ms waiting for pod "kube-proxy-h9rtm" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:30.998894  370050 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:31.401358  370050 pod_ready.go:92] pod "kube-scheduler-pause-892275" in "kube-system" namespace has status "Ready":"True"
	I1002 11:40:31.401383  370050 pod_ready.go:81] duration metric: took 402.481703ms waiting for pod "kube-scheduler-pause-892275" in "kube-system" namespace to be "Ready" ...
	I1002 11:40:31.401391  370050 pod_ready.go:38] duration metric: took 2.602996119s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:40:31.401412  370050 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:40:31.401470  370050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:40:31.418527  370050 api_server.go:72] duration metric: took 2.754292051s to wait for apiserver process to appear ...
	I1002 11:40:31.418561  370050 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:40:31.418585  370050 api_server.go:253] Checking apiserver healthz at https://192.168.61.203:8443/healthz ...
	I1002 11:40:31.427123  370050 api_server.go:279] https://192.168.61.203:8443/healthz returned 200:
	ok
	I1002 11:40:31.428838  370050 api_server.go:141] control plane version: v1.28.2
	I1002 11:40:31.428863  370050 api_server.go:131] duration metric: took 10.292083ms to wait for apiserver health ...
	I1002 11:40:31.428874  370050 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:40:31.601650  370050 system_pods.go:59] 6 kube-system pods found
	I1002 11:40:31.601692  370050 system_pods.go:61] "coredns-5dd5756b68-4wp2m" [83150d98-2463-4dd5-ab60-18ea97aa0fbf] Running
	I1002 11:40:31.601700  370050 system_pods.go:61] "etcd-pause-892275" [b59f99ce-a6e5-44b7-88d5-65507ac1abd6] Running
	I1002 11:40:31.601709  370050 system_pods.go:61] "kube-apiserver-pause-892275" [91ef49df-2f4d-4799-b8b0-409c6bb79a94] Running
	I1002 11:40:31.601715  370050 system_pods.go:61] "kube-controller-manager-pause-892275" [607268c2-f567-4792-8063-fdf09bf0ee8e] Running
	I1002 11:40:31.601722  370050 system_pods.go:61] "kube-proxy-h9rtm" [82952868-5c0c-4b75-a974-3d22d51657f1] Running
	I1002 11:40:31.601728  370050 system_pods.go:61] "kube-scheduler-pause-892275" [202c923b-a98c-4fc8-aaf2-527dbda63e56] Running
	I1002 11:40:31.601737  370050 system_pods.go:74] duration metric: took 172.855801ms to wait for pod list to return data ...
	I1002 11:40:31.601747  370050 default_sa.go:34] waiting for default service account to be created ...
	I1002 11:40:31.813739  370050 default_sa.go:45] found service account: "default"
	I1002 11:40:31.813837  370050 default_sa.go:55] duration metric: took 212.075355ms for default service account to be created ...
	I1002 11:40:31.813863  370050 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 11:40:32.001829  370050 system_pods.go:86] 6 kube-system pods found
	I1002 11:40:32.001869  370050 system_pods.go:89] "coredns-5dd5756b68-4wp2m" [83150d98-2463-4dd5-ab60-18ea97aa0fbf] Running
	I1002 11:40:32.001878  370050 system_pods.go:89] "etcd-pause-892275" [b59f99ce-a6e5-44b7-88d5-65507ac1abd6] Running
	I1002 11:40:32.001885  370050 system_pods.go:89] "kube-apiserver-pause-892275" [91ef49df-2f4d-4799-b8b0-409c6bb79a94] Running
	I1002 11:40:32.001892  370050 system_pods.go:89] "kube-controller-manager-pause-892275" [607268c2-f567-4792-8063-fdf09bf0ee8e] Running
	I1002 11:40:32.001899  370050 system_pods.go:89] "kube-proxy-h9rtm" [82952868-5c0c-4b75-a974-3d22d51657f1] Running
	I1002 11:40:32.001904  370050 system_pods.go:89] "kube-scheduler-pause-892275" [202c923b-a98c-4fc8-aaf2-527dbda63e56] Running
	I1002 11:40:32.001915  370050 system_pods.go:126] duration metric: took 188.034831ms to wait for k8s-apps to be running ...
	I1002 11:40:32.001941  370050 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:40:32.002001  370050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:40:32.016812  370050 system_svc.go:56] duration metric: took 14.858534ms WaitForService to wait for kubelet.
	I1002 11:40:32.016846  370050 kubeadm.go:581] duration metric: took 3.352623583s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:40:32.016872  370050 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:40:32.199137  370050 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:40:32.199169  370050 node_conditions.go:123] node cpu capacity is 2
	I1002 11:40:32.199184  370050 node_conditions.go:105] duration metric: took 182.305653ms to run NodePressure ...
	I1002 11:40:32.199198  370050 start.go:228] waiting for startup goroutines ...
	I1002 11:40:32.199206  370050 start.go:233] waiting for cluster config update ...
	I1002 11:40:32.199214  370050 start.go:242] writing updated cluster config ...
	I1002 11:40:32.199586  370050 ssh_runner.go:195] Run: rm -f paused
	I1002 11:40:32.258910  370050 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 11:40:32.261752  370050 out.go:177] * Done! kubectl is now configured to use "pause-892275" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-02 11:38:12 UTC, ends at Mon 2023-10-02 11:40:35 UTC. --
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.128879850Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696246835128753206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=1cd70603-b5a6-45f5-adfb-3097d611a51e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.129734450Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3d1c3ad1-eda1-4028-aba4-123e8f1aeb93 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.129887013Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3d1c3ad1-eda1-4028-aba4-123e8f1aeb93 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.130319430Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e056231c4dab5ce96e0f2ba6866351ea183a365a9098c932dead4cefb57dee3f,PodSandboxId:a8e1583dec1a8344c6727980581c8d511fa2c8252086307ac699ff5d38bd50ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696246817803694710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4wp2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83150d98-2463-4dd5-ab60-18ea97aa0fbf,},Annotations:map[string]string{io.kubernetes.container.hash: ea935ff7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a36b6719cdfce33e6b158f5be54123a96751a9a8a91cae0481038b304f0da6,PodSandboxId:e64f3caa15a5147a018f00af77784bc64b27a4804bbfdc57cdce6d6fda2ee319,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696246817765727345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9rtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82952868-5c0c-4b75-a974-3d22d51657f1,},Annotations:map[string]string{io.kubernetes.container.hash: efb652b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e1e4107ae31b34209ea1017506c5be20040ab231df5cac960bcc268e4b6997,PodSandboxId:a1596a37cbb748e16336b494514b1e691999f6caf039d19eb14e424937086107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696246812287224653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d94e6e9fb11e07718e5e3bf90e75114,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad66ee2a5f6dd2972fc383c1232bcdb3d8228769c6d29812fff33d00982ab32,PodSandboxId:a2553e3185728fd9816d1afb6ae1c4bd0d5ee985202b15be0e232e99217e626c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696246812319497756,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b61273fda713f0cdc7bb68bf6df3df0,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a13b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f72cb018c252b02d527b0f1a41f42c0266c273bb327fe53aa78893401e8825,PodSandboxId:ac0b5c67a83a62175e69cc8e5fab917306f959baa2609f8088eed04b382d6fef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696246812227595175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc76a95a27748f62d922d71c984e5e27,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7f3576c72351bd7b5608b92a734d6256b987a755e9fc85288f633cd931f24d,PodSandboxId:79a4c71f4945d7de2d071b0f4f4681283668cba87797d3c2711a20782330182e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696246812260502456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cfe5fdbd752391d091f67bf5a0ee880,},Annotations:map[string]string{io.kubernetes.container.hash: 71e046b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5cafeee47dfcba3aeac4aaa3d2f1f8f613c7a2ca593b2d4bc5b4c508c79f52b,PodSandboxId:e64f3caa15a5147a018f00af77784bc64b27a4804bbfdc57cdce6d6fda2ee319,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_EXITED,CreatedAt:1696246800315900205,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9rtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82952868-5c0c-4b75-a974-3d22d51657f1,},Annotations:map[string]string{io.kubernetes.container.hash: efb652b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ca4784b618059d8b7840bbd5e90d613ac48d8bfb2d4672a3c8bb38cd3a20a29,PodSandboxId:15b68d076f5e460d4666c4defab88fb26d2323006c9f302e5a79779b2f1e2ec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696246795568890726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4wp2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83150d98-2463-4dd5-ab60-18ea97aa0fbf,},Annotations:map[string]string{io.kubernetes.container.hash: ea935ff7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97af4b5192afce52a285513fe73088be2dbe2cd1069f9622ed24ec79a47dd30,PodSandboxId:5d23fa1e437fa79063ea5d81e3fac650cc529132a0f7152c572b14c839c98b72,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1696246795294121084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cfe5fdbd752391d091f67bf5a0ee880,},Annotations:map[string]string{io.kubernetes.container.hash: 71e046b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1947700b5aee5ffa5b0ddab104249e20baa7782c4fe1ec00982ad30d0662b3b6,PodSandboxId:9d3f771c04ca97cba54b854719dfe832a8cd5ec85b9181669b4b7ca7f1b0e5e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,State:CONTAINER_EXITED,CreatedAt:1696246794730446299,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d94e6e9fb11e07718e5e3bf90e75114,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e78210aab09dcbbf4a1a0f6d5ea9d2d724740fbf655fb0013a85dc7b5816893,PodSandboxId:6fad838b7d3eedc368d31c1317839cd0af030d5dc0ce7086af0f19de9c631e0f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,State:CONTAINER_EXITED,CreatedAt:1696246794833339019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc76a95a27748f62d922d71c984e5e27,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dadcd321818f77fe74ea8528bbad283818e8005fa2ab6b37fce904fe6ee0a6,PodSandboxId:5d657a476eeb8566fd64b820d502971fb1e4e24ca8a4900d7714bebf97b74eeb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,State:CONTAINER_EXITED,CreatedAt:1696246794078692088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b61273fda713f0cdc7bb68bf6df3df0,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a13b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3d1c3ad1-eda1-4028-aba4-123e8f1aeb93 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.174939101Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=12f59fe3-4edd-4550-b9c1-60d3f3b78084 name=/runtime.v1.RuntimeService/Version
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.175011224Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=12f59fe3-4edd-4550-b9c1-60d3f3b78084 name=/runtime.v1.RuntimeService/Version
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.176376479Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=be39a3ad-c8a1-45fc-ba60-f5532a57963a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.176719807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696246835176708107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=be39a3ad-c8a1-45fc-ba60-f5532a57963a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.177422542Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fe6f83a2-04ea-4a4a-b562-7d75ac1bd6b3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.177491510Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fe6f83a2-04ea-4a4a-b562-7d75ac1bd6b3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.177732256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e056231c4dab5ce96e0f2ba6866351ea183a365a9098c932dead4cefb57dee3f,PodSandboxId:a8e1583dec1a8344c6727980581c8d511fa2c8252086307ac699ff5d38bd50ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696246817803694710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4wp2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83150d98-2463-4dd5-ab60-18ea97aa0fbf,},Annotations:map[string]string{io.kubernetes.container.hash: ea935ff7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a36b6719cdfce33e6b158f5be54123a96751a9a8a91cae0481038b304f0da6,PodSandboxId:e64f3caa15a5147a018f00af77784bc64b27a4804bbfdc57cdce6d6fda2ee319,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696246817765727345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9rtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 82952868-5c0c-4b75-a974-3d22d51657f1,},Annotations:map[string]string{io.kubernetes.container.hash: efb652b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e1e4107ae31b34209ea1017506c5be20040ab231df5cac960bcc268e4b6997,PodSandboxId:a1596a37cbb748e16336b494514b1e691999f6caf039d19eb14e424937086107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696246812287224653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d94e6e
9fb11e07718e5e3bf90e75114,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad66ee2a5f6dd2972fc383c1232bcdb3d8228769c6d29812fff33d00982ab32,PodSandboxId:a2553e3185728fd9816d1afb6ae1c4bd0d5ee985202b15be0e232e99217e626c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696246812319497756,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b61273fda713f0cdc7bb68
bf6df3df0,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a13b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f72cb018c252b02d527b0f1a41f42c0266c273bb327fe53aa78893401e8825,PodSandboxId:ac0b5c67a83a62175e69cc8e5fab917306f959baa2609f8088eed04b382d6fef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696246812227595175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7
6a95a27748f62d922d71c984e5e27,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7f3576c72351bd7b5608b92a734d6256b987a755e9fc85288f633cd931f24d,PodSandboxId:79a4c71f4945d7de2d071b0f4f4681283668cba87797d3c2711a20782330182e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696246812260502456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cfe5fdbd752391d091f67bf5a0ee880,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 71e046b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5cafeee47dfcba3aeac4aaa3d2f1f8f613c7a2ca593b2d4bc5b4c508c79f52b,PodSandboxId:e64f3caa15a5147a018f00af77784bc64b27a4804bbfdc57cdce6d6fda2ee319,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_EXITED,CreatedAt:1696246800315900205,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9rtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82952868-5c0c-4b75-a974-3d22d51657f1,},Annotations:map[string]string{io.kubernetes.container.hash:
efb652b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ca4784b618059d8b7840bbd5e90d613ac48d8bfb2d4672a3c8bb38cd3a20a29,PodSandboxId:15b68d076f5e460d4666c4defab88fb26d2323006c9f302e5a79779b2f1e2ec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696246795568890726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4wp2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83150d98-2463-4dd5-ab60-18ea97aa0fbf,},Annotations:map[string]string{io.kubernetes.container.hash: ea935ff7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97af4b5192afce52a285513fe73088be2dbe2cd1069f9622ed24ec79a47dd30,PodSandboxId:5d23fa1e437fa79063ea5d81e3fac650cc529132a0f7152c572b14c839c98b72,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1696246795294121084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cfe5fdbd752391d091f67
bf5a0ee880,},Annotations:map[string]string{io.kubernetes.container.hash: 71e046b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1947700b5aee5ffa5b0ddab104249e20baa7782c4fe1ec00982ad30d0662b3b6,PodSandboxId:9d3f771c04ca97cba54b854719dfe832a8cd5ec85b9181669b4b7ca7f1b0e5e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,State:CONTAINER_EXITED,CreatedAt:1696246794730446299,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d94e6e9fb11e07718e5e3bf90e75114,},Annotations:map[string]string{io.kubernete
s.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e78210aab09dcbbf4a1a0f6d5ea9d2d724740fbf655fb0013a85dc7b5816893,PodSandboxId:6fad838b7d3eedc368d31c1317839cd0af030d5dc0ce7086af0f19de9c631e0f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,State:CONTAINER_EXITED,CreatedAt:1696246794833339019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc76a95a27748f62d922d71c984e5e27,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dadcd321818f77fe74ea8528bbad283818e8005fa2ab6b37fce904fe6ee0a6,PodSandboxId:5d657a476eeb8566fd64b820d502971fb1e4e24ca8a4900d7714bebf97b74eeb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,State:CONTAINER_EXITED,CreatedAt:1696246794078692088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b61273fda713f0cdc7bb68bf6df3df0,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a13b0,io.kubernetes.container.restartCount: 1,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fe6f83a2-04ea-4a4a-b562-7d75ac1bd6b3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.223889527Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=cf696709-3d25-42cd-9cac-2a65a5810054 name=/runtime.v1.RuntimeService/Version
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.223977104Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cf696709-3d25-42cd-9cac-2a65a5810054 name=/runtime.v1.RuntimeService/Version
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.225185176Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=dac56173-422d-4984-befb-bbbc1ef31ec2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.225522191Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696246835225509496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=dac56173-422d-4984-befb-bbbc1ef31ec2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.226232546Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=45ac9b0a-e5d6-4743-afa4-9d199c0a99e1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.226299136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=45ac9b0a-e5d6-4743-afa4-9d199c0a99e1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.226572544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e056231c4dab5ce96e0f2ba6866351ea183a365a9098c932dead4cefb57dee3f,PodSandboxId:a8e1583dec1a8344c6727980581c8d511fa2c8252086307ac699ff5d38bd50ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696246817803694710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4wp2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83150d98-2463-4dd5-ab60-18ea97aa0fbf,},Annotations:map[string]string{io.kubernetes.container.hash: ea935ff7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a36b6719cdfce33e6b158f5be54123a96751a9a8a91cae0481038b304f0da6,PodSandboxId:e64f3caa15a5147a018f00af77784bc64b27a4804bbfdc57cdce6d6fda2ee319,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696246817765727345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9rtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 82952868-5c0c-4b75-a974-3d22d51657f1,},Annotations:map[string]string{io.kubernetes.container.hash: efb652b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e1e4107ae31b34209ea1017506c5be20040ab231df5cac960bcc268e4b6997,PodSandboxId:a1596a37cbb748e16336b494514b1e691999f6caf039d19eb14e424937086107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696246812287224653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d94e6e
9fb11e07718e5e3bf90e75114,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad66ee2a5f6dd2972fc383c1232bcdb3d8228769c6d29812fff33d00982ab32,PodSandboxId:a2553e3185728fd9816d1afb6ae1c4bd0d5ee985202b15be0e232e99217e626c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696246812319497756,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b61273fda713f0cdc7bb68
bf6df3df0,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a13b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f72cb018c252b02d527b0f1a41f42c0266c273bb327fe53aa78893401e8825,PodSandboxId:ac0b5c67a83a62175e69cc8e5fab917306f959baa2609f8088eed04b382d6fef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696246812227595175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7
6a95a27748f62d922d71c984e5e27,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7f3576c72351bd7b5608b92a734d6256b987a755e9fc85288f633cd931f24d,PodSandboxId:79a4c71f4945d7de2d071b0f4f4681283668cba87797d3c2711a20782330182e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696246812260502456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cfe5fdbd752391d091f67bf5a0ee880,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 71e046b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5cafeee47dfcba3aeac4aaa3d2f1f8f613c7a2ca593b2d4bc5b4c508c79f52b,PodSandboxId:e64f3caa15a5147a018f00af77784bc64b27a4804bbfdc57cdce6d6fda2ee319,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_EXITED,CreatedAt:1696246800315900205,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9rtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82952868-5c0c-4b75-a974-3d22d51657f1,},Annotations:map[string]string{io.kubernetes.container.hash:
efb652b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ca4784b618059d8b7840bbd5e90d613ac48d8bfb2d4672a3c8bb38cd3a20a29,PodSandboxId:15b68d076f5e460d4666c4defab88fb26d2323006c9f302e5a79779b2f1e2ec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696246795568890726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4wp2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83150d98-2463-4dd5-ab60-18ea97aa0fbf,},Annotations:map[string]string{io.kubernetes.container.hash: ea935ff7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97af4b5192afce52a285513fe73088be2dbe2cd1069f9622ed24ec79a47dd30,PodSandboxId:5d23fa1e437fa79063ea5d81e3fac650cc529132a0f7152c572b14c839c98b72,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1696246795294121084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cfe5fdbd752391d091f67
bf5a0ee880,},Annotations:map[string]string{io.kubernetes.container.hash: 71e046b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1947700b5aee5ffa5b0ddab104249e20baa7782c4fe1ec00982ad30d0662b3b6,PodSandboxId:9d3f771c04ca97cba54b854719dfe832a8cd5ec85b9181669b4b7ca7f1b0e5e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,State:CONTAINER_EXITED,CreatedAt:1696246794730446299,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d94e6e9fb11e07718e5e3bf90e75114,},Annotations:map[string]string{io.kubernete
s.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e78210aab09dcbbf4a1a0f6d5ea9d2d724740fbf655fb0013a85dc7b5816893,PodSandboxId:6fad838b7d3eedc368d31c1317839cd0af030d5dc0ce7086af0f19de9c631e0f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,State:CONTAINER_EXITED,CreatedAt:1696246794833339019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc76a95a27748f62d922d71c984e5e27,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dadcd321818f77fe74ea8528bbad283818e8005fa2ab6b37fce904fe6ee0a6,PodSandboxId:5d657a476eeb8566fd64b820d502971fb1e4e24ca8a4900d7714bebf97b74eeb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,State:CONTAINER_EXITED,CreatedAt:1696246794078692088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b61273fda713f0cdc7bb68bf6df3df0,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a13b0,io.kubernetes.container.restartCount: 1,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=45ac9b0a-e5d6-4743-afa4-9d199c0a99e1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.271601685Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b630ba43-bca2-43e3-8de0-1e264d76c91c name=/runtime.v1.RuntimeService/Version
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.271660279Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b630ba43-bca2-43e3-8de0-1e264d76c91c name=/runtime.v1.RuntimeService/Version
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.273548209Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7f45443c-531f-46dc-b8d5-8c6d44a42dc4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.274042045Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696246835274025216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=7f45443c-531f-46dc-b8d5-8c6d44a42dc4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.274631166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0adc5436-c371-4b2e-8a0b-d34b1bf13917 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.274678452Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0adc5436-c371-4b2e-8a0b-d34b1bf13917 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 11:40:35 pause-892275 crio[2601]: time="2023-10-02 11:40:35.275067461Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e056231c4dab5ce96e0f2ba6866351ea183a365a9098c932dead4cefb57dee3f,PodSandboxId:a8e1583dec1a8344c6727980581c8d511fa2c8252086307ac699ff5d38bd50ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696246817803694710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4wp2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83150d98-2463-4dd5-ab60-18ea97aa0fbf,},Annotations:map[string]string{io.kubernetes.container.hash: ea935ff7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a36b6719cdfce33e6b158f5be54123a96751a9a8a91cae0481038b304f0da6,PodSandboxId:e64f3caa15a5147a018f00af77784bc64b27a4804bbfdc57cdce6d6fda2ee319,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696246817765727345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9rtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 82952868-5c0c-4b75-a974-3d22d51657f1,},Annotations:map[string]string{io.kubernetes.container.hash: efb652b6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47e1e4107ae31b34209ea1017506c5be20040ab231df5cac960bcc268e4b6997,PodSandboxId:a1596a37cbb748e16336b494514b1e691999f6caf039d19eb14e424937086107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696246812287224653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d94e6e
9fb11e07718e5e3bf90e75114,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad66ee2a5f6dd2972fc383c1232bcdb3d8228769c6d29812fff33d00982ab32,PodSandboxId:a2553e3185728fd9816d1afb6ae1c4bd0d5ee985202b15be0e232e99217e626c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696246812319497756,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b61273fda713f0cdc7bb68
bf6df3df0,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a13b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9f72cb018c252b02d527b0f1a41f42c0266c273bb327fe53aa78893401e8825,PodSandboxId:ac0b5c67a83a62175e69cc8e5fab917306f959baa2609f8088eed04b382d6fef,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696246812227595175,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc7
6a95a27748f62d922d71c984e5e27,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7f3576c72351bd7b5608b92a734d6256b987a755e9fc85288f633cd931f24d,PodSandboxId:79a4c71f4945d7de2d071b0f4f4681283668cba87797d3c2711a20782330182e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696246812260502456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cfe5fdbd752391d091f67bf5a0ee880,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 71e046b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5cafeee47dfcba3aeac4aaa3d2f1f8f613c7a2ca593b2d4bc5b4c508c79f52b,PodSandboxId:e64f3caa15a5147a018f00af77784bc64b27a4804bbfdc57cdce6d6fda2ee319,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_EXITED,CreatedAt:1696246800315900205,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h9rtm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82952868-5c0c-4b75-a974-3d22d51657f1,},Annotations:map[string]string{io.kubernetes.container.hash:
efb652b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ca4784b618059d8b7840bbd5e90d613ac48d8bfb2d4672a3c8bb38cd3a20a29,PodSandboxId:15b68d076f5e460d4666c4defab88fb26d2323006c9f302e5a79779b2f1e2ec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696246795568890726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4wp2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83150d98-2463-4dd5-ab60-18ea97aa0fbf,},Annotations:map[string]string{io.kubernetes.container.hash: ea935ff7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97af4b5192afce52a285513fe73088be2dbe2cd1069f9622ed24ec79a47dd30,PodSandboxId:5d23fa1e437fa79063ea5d81e3fac650cc529132a0f7152c572b14c839c98b72,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1696246795294121084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cfe5fdbd752391d091f67
bf5a0ee880,},Annotations:map[string]string{io.kubernetes.container.hash: 71e046b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1947700b5aee5ffa5b0ddab104249e20baa7782c4fe1ec00982ad30d0662b3b6,PodSandboxId:9d3f771c04ca97cba54b854719dfe832a8cd5ec85b9181669b4b7ca7f1b0e5e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,State:CONTAINER_EXITED,CreatedAt:1696246794730446299,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d94e6e9fb11e07718e5e3bf90e75114,},Annotations:map[string]string{io.kubernete
s.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e78210aab09dcbbf4a1a0f6d5ea9d2d724740fbf655fb0013a85dc7b5816893,PodSandboxId:6fad838b7d3eedc368d31c1317839cd0af030d5dc0ce7086af0f19de9c631e0f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,State:CONTAINER_EXITED,CreatedAt:1696246794833339019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc76a95a27748f62d922d71c984e5e27,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14dadcd321818f77fe74ea8528bbad283818e8005fa2ab6b37fce904fe6ee0a6,PodSandboxId:5d657a476eeb8566fd64b820d502971fb1e4e24ca8a4900d7714bebf97b74eeb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,State:CONTAINER_EXITED,CreatedAt:1696246794078692088,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-892275,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b61273fda713f0cdc7bb68bf6df3df0,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a13b0,io.kubernetes.container.restartCount: 1,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0adc5436-c371-4b2e-8a0b-d34b1bf13917 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e056231c4dab5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   17 seconds ago      Running             coredns                   2                   a8e1583dec1a8       coredns-5dd5756b68-4wp2m
	97a36b6719cdf       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   17 seconds ago      Running             kube-proxy                2                   e64f3caa15a51       kube-proxy-h9rtm
	7ad66ee2a5f6d       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   23 seconds ago      Running             kube-apiserver            2                   a2553e3185728       kube-apiserver-pause-892275
	47e1e4107ae31       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   23 seconds ago      Running             kube-scheduler            2                   a1596a37cbb74       kube-scheduler-pause-892275
	8b7f3576c7235       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   23 seconds ago      Running             etcd                      2                   79a4c71f4945d       etcd-pause-892275
	e9f72cb018c25       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   23 seconds ago      Running             kube-controller-manager   2                   ac0b5c67a83a6       kube-controller-manager-pause-892275
	c5cafeee47dfc       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   35 seconds ago      Exited              kube-proxy                1                   e64f3caa15a51       kube-proxy-h9rtm
	0ca4784b61805       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   39 seconds ago      Exited              coredns                   1                   15b68d076f5e4       coredns-5dd5756b68-4wp2m
	a97af4b5192af       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   40 seconds ago      Exited              etcd                      1                   5d23fa1e437fa       etcd-pause-892275
	9e78210aab09d       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   40 seconds ago      Exited              kube-controller-manager   1                   6fad838b7d3ee       kube-controller-manager-pause-892275
	1947700b5aee5       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   40 seconds ago      Exited              kube-scheduler            1                   9d3f771c04ca9       kube-scheduler-pause-892275
	14dadcd321818       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   41 seconds ago      Exited              kube-apiserver            1                   5d657a476eeb8       kube-apiserver-pause-892275
	
	* 
	* ==> coredns [0ca4784b618059d8b7840bbd5e90d613ac48d8bfb2d4672a3c8bb38cd3a20a29] <==
	* 
	* 
	* ==> coredns [e056231c4dab5ce96e0f2ba6866351ea183a365a9098c932dead4cefb57dee3f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48079 - 52002 "HINFO IN 2874011237112364441.1389752061937950605. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012240445s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-892275
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-892275
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=pause-892275
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T11_38_50_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 11:38:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-892275
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 11:40:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 11:40:17 +0000   Mon, 02 Oct 2023 11:38:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 11:40:17 +0000   Mon, 02 Oct 2023 11:38:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 11:40:17 +0000   Mon, 02 Oct 2023 11:38:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 11:40:17 +0000   Mon, 02 Oct 2023 11:38:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.203
	  Hostname:    pause-892275
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 057e80e18df541c1876eb3ad2541b9e8
	  System UUID:                057e80e1-8df5-41c1-876e-b3ad2541b9e8
	  Boot ID:                    f90c013b-1d34-4b3c-950f-d55377e21595
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-4wp2m                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     93s
	  kube-system                 etcd-pause-892275                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         104s
	  kube-system                 kube-apiserver-pause-892275             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-pause-892275    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-h9rtm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-pause-892275             100m (5%)     0 (0%)      0 (0%)           0 (0%)         104s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  Starting                 17s                  kube-proxy       
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s (x8 over 114s)  kubelet          Node pause-892275 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s (x8 over 114s)  kubelet          Node pause-892275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x7 over 114s)  kubelet          Node pause-892275 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  114s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                105s                 kubelet          Node pause-892275 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  104s                 kubelet          Node pause-892275 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s                 kubelet          Node pause-892275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s                 kubelet          Node pause-892275 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           94s                  node-controller  Node pause-892275 event: Registered Node pause-892275 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node pause-892275 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node pause-892275 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node pause-892275 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                   node-controller  Node pause-892275 event: Registered Node pause-892275 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct 2 11:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.077209] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.696530] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.951494] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.146245] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.172759] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.366171] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.154860] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.172728] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.135004] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.285055] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[  +9.986795] systemd-fstab-generator[920]: Ignoring "noauto" for root device
	[  +9.823650] systemd-fstab-generator[1257]: Ignoring "noauto" for root device
	[Oct 2 11:39] kauditd_printk_skb: 21 callbacks suppressed
	[  +9.758882] systemd-fstab-generator[2253]: Ignoring "noauto" for root device
	[  +0.242465] systemd-fstab-generator[2265]: Ignoring "noauto" for root device
	[  +0.762195] systemd-fstab-generator[2329]: Ignoring "noauto" for root device
	[  +0.336104] systemd-fstab-generator[2405]: Ignoring "noauto" for root device
	[  +0.515817] systemd-fstab-generator[2476]: Ignoring "noauto" for root device
	[Oct 2 11:40] systemd-fstab-generator[3217]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [8b7f3576c72351bd7b5608b92a734d6256b987a755e9fc85288f633cd931f24d] <==
	* {"level":"info","ts":"2023-10-02T11:40:13.794959Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T11:40:13.79497Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T11:40:13.795184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 switched to configuration voters=(4453574332218813984)"}
	{"level":"info","ts":"2023-10-02T11:40:13.795228Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","added-peer-id":"3dce464254b32e20","added-peer-peer-urls":["https://192.168.61.203:2380"]}
	{"level":"info","ts":"2023-10-02T11:40:13.795301Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:40:13.795324Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:40:13.798484Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-02T11:40:13.801119Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"3dce464254b32e20","initial-advertise-peer-urls":["https://192.168.61.203:2380"],"listen-peer-urls":["https://192.168.61.203:2380"],"advertise-client-urls":["https://192.168.61.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-02T11:40:13.801406Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2023-10-02T11:40:13.801443Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2023-10-02T11:40:13.801212Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-02T11:40:14.855939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-02T11:40:14.856021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-02T11:40:14.856067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 received MsgPreVoteResp from 3dce464254b32e20 at term 2"}
	{"level":"info","ts":"2023-10-02T11:40:14.856092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became candidate at term 3"}
	{"level":"info","ts":"2023-10-02T11:40:14.856114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 received MsgVoteResp from 3dce464254b32e20 at term 3"}
	{"level":"info","ts":"2023-10-02T11:40:14.856132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became leader at term 3"}
	{"level":"info","ts":"2023-10-02T11:40:14.85615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3dce464254b32e20 elected leader 3dce464254b32e20 at term 3"}
	{"level":"info","ts":"2023-10-02T11:40:14.865312Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"3dce464254b32e20","local-member-attributes":"{Name:pause-892275 ClientURLs:[https://192.168.61.203:2379]}","request-path":"/0/members/3dce464254b32e20/attributes","cluster-id":"817eda555b894faf","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T11:40:14.866915Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T11:40:14.868545Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.203:2379"}
	{"level":"info","ts":"2023-10-02T11:40:14.868758Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T11:40:14.878049Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T11:40:14.885846Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T11:40:14.885936Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [a97af4b5192afce52a285513fe73088be2dbe2cd1069f9622ed24ec79a47dd30] <==
	* 
	* 
	* ==> kernel <==
	*  11:40:35 up 2 min,  0 users,  load average: 1.76, 0.71, 0.26
	Linux pause-892275 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [14dadcd321818f77fe74ea8528bbad283818e8005fa2ab6b37fce904fe6ee0a6] <==
	* I1002 11:39:55.687460       1 options.go:220] external host was not specified, using 192.168.61.203
	I1002 11:39:55.688918       1 server.go:148] Version: v1.28.2
	I1002 11:39:55.689011       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 11:39:56.758693       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I1002 11:39:56.768536       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1002 11:39:56.768650       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1002 11:39:56.768957       1 instance.go:298] Using reconciler: lease
	W1002 11:39:56.770685       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [7ad66ee2a5f6dd2972fc383c1232bcdb3d8228769c6d29812fff33d00982ab32] <==
	* I1002 11:40:17.196053       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1002 11:40:17.197057       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1002 11:40:17.197110       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1002 11:40:17.347021       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 11:40:17.361565       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1002 11:40:17.378642       1 shared_informer.go:318] Caches are synced for configmaps
	I1002 11:40:17.378758       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1002 11:40:17.378854       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1002 11:40:17.379378       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 11:40:17.380544       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1002 11:40:17.383243       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 11:40:17.397910       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1002 11:40:17.397940       1 aggregator.go:166] initial CRD sync complete...
	I1002 11:40:17.397952       1 autoregister_controller.go:141] Starting autoregister controller
	I1002 11:40:17.397958       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 11:40:17.397963       1 cache.go:39] Caches are synced for autoregister controller
	E1002 11:40:17.432283       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 11:40:18.194335       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 11:40:18.925585       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 11:40:18.941396       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 11:40:19.005969       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1002 11:40:19.045929       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 11:40:19.054884       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 11:40:30.204659       1 controller.go:624] quota admission added evaluator for: endpoints
	I1002 11:40:30.337866       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [9e78210aab09dcbbf4a1a0f6d5ea9d2d724740fbf655fb0013a85dc7b5816893] <==
	* 
	* 
	* ==> kube-controller-manager [e9f72cb018c252b02d527b0f1a41f42c0266c273bb327fe53aa78893401e8825] <==
	* I1002 11:40:30.290053       1 shared_informer.go:318] Caches are synced for GC
	I1002 11:40:30.299594       1 shared_informer.go:318] Caches are synced for node
	I1002 11:40:30.299834       1 range_allocator.go:174] "Sending events to api server"
	I1002 11:40:30.299875       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1002 11:40:30.299883       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1002 11:40:30.299892       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1002 11:40:30.302302       1 shared_informer.go:318] Caches are synced for taint
	I1002 11:40:30.302445       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1002 11:40:30.302572       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1002 11:40:30.302602       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-892275"
	I1002 11:40:30.302706       1 taint_manager.go:211] "Sending events to api server"
	I1002 11:40:30.302709       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1002 11:40:30.303421       1 event.go:307] "Event occurred" object="pause-892275" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-892275 event: Registered Node pause-892275 in Controller"
	I1002 11:40:30.305968       1 shared_informer.go:318] Caches are synced for persistent volume
	I1002 11:40:30.312166       1 shared_informer.go:318] Caches are synced for TTL
	I1002 11:40:30.319578       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1002 11:40:30.324753       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 11:40:30.334878       1 shared_informer.go:318] Caches are synced for daemon sets
	I1002 11:40:30.337341       1 shared_informer.go:318] Caches are synced for attach detach
	I1002 11:40:30.373722       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1002 11:40:30.382234       1 shared_informer.go:318] Caches are synced for disruption
	I1002 11:40:30.395852       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 11:40:30.755422       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 11:40:30.801137       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 11:40:30.801190       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [97a36b6719cdfce33e6b158f5be54123a96751a9a8a91cae0481038b304f0da6] <==
	* I1002 11:40:18.061708       1 server_others.go:69] "Using iptables proxy"
	I1002 11:40:18.079402       1 node.go:141] Successfully retrieved node IP: 192.168.61.203
	I1002 11:40:18.120029       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1002 11:40:18.120082       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 11:40:18.123074       1 server_others.go:152] "Using iptables Proxier"
	I1002 11:40:18.123520       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 11:40:18.124293       1 server.go:846] "Version info" version="v1.28.2"
	I1002 11:40:18.124342       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 11:40:18.127113       1 config.go:188] "Starting service config controller"
	I1002 11:40:18.127158       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 11:40:18.127197       1 config.go:97] "Starting endpoint slice config controller"
	I1002 11:40:18.127740       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 11:40:18.127492       1 config.go:315] "Starting node config controller"
	I1002 11:40:18.128292       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 11:40:18.228880       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 11:40:18.228967       1 shared_informer.go:318] Caches are synced for service config
	I1002 11:40:18.229258       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [c5cafeee47dfcba3aeac4aaa3d2f1f8f613c7a2ca593b2d4bc5b4c508c79f52b] <==
	* I1002 11:40:00.456845       1 server_others.go:69] "Using iptables proxy"
	E1002 11:40:00.460385       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-892275": dial tcp 192.168.61.203:8443: connect: connection refused
	E1002 11:40:01.603311       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-892275": dial tcp 192.168.61.203:8443: connect: connection refused
	E1002 11:40:03.902454       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-892275": dial tcp 192.168.61.203:8443: connect: connection refused
	E1002 11:40:08.207910       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-892275": dial tcp 192.168.61.203:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [1947700b5aee5ffa5b0ddab104249e20baa7782c4fe1ec00982ad30d0662b3b6] <==
	* 
	* 
	* ==> kube-scheduler [47e1e4107ae31b34209ea1017506c5be20040ab231df5cac960bcc268e4b6997] <==
	* I1002 11:40:15.567514       1 serving.go:348] Generated self-signed cert in-memory
	W1002 11:40:17.320639       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 11:40:17.320738       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 11:40:17.320856       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 11:40:17.320898       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 11:40:17.357479       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1002 11:40:17.357535       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 11:40:17.359517       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 11:40:17.359711       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 11:40:17.359740       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 11:40:17.359753       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 11:40:17.460259       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 11:38:12 UTC, ends at Mon 2023-10-02 11:40:36 UTC. --
	Oct 02 11:40:12 pause-892275 kubelet[3223]: I1002 11:40:12.193924    3223 scope.go:117] "RemoveContainer" containerID="1947700b5aee5ffa5b0ddab104249e20baa7782c4fe1ec00982ad30d0662b3b6"
	Oct 02 11:40:12 pause-892275 kubelet[3223]: W1002 11:40:12.456165    3223 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.203:8443: connect: connection refused
	Oct 02 11:40:12 pause-892275 kubelet[3223]: E1002 11:40:12.456225    3223 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.203:8443: connect: connection refused
	Oct 02 11:40:12 pause-892275 kubelet[3223]: W1002 11:40:12.565298    3223 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-892275&limit=500&resourceVersion=0": dial tcp 192.168.61.203:8443: connect: connection refused
	Oct 02 11:40:12 pause-892275 kubelet[3223]: E1002 11:40:12.565380    3223 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-892275&limit=500&resourceVersion=0": dial tcp 192.168.61.203:8443: connect: connection refused
	Oct 02 11:40:12 pause-892275 kubelet[3223]: W1002 11:40:12.586278    3223 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.203:8443: connect: connection refused
	Oct 02 11:40:12 pause-892275 kubelet[3223]: E1002 11:40:12.586355    3223 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.203:8443: connect: connection refused
	Oct 02 11:40:12 pause-892275 kubelet[3223]: E1002 11:40:12.845425    3223 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-892275?timeout=10s\": dial tcp 192.168.61.203:8443: connect: connection refused" interval="1.6s"
	Oct 02 11:40:12 pause-892275 kubelet[3223]: W1002 11:40:12.922595    3223 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.203:8443: connect: connection refused
	Oct 02 11:40:12 pause-892275 kubelet[3223]: E1002 11:40:12.922677    3223 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.203:8443: connect: connection refused
	Oct 02 11:40:12 pause-892275 kubelet[3223]: I1002 11:40:12.962167    3223 kubelet_node_status.go:70] "Attempting to register node" node="pause-892275"
	Oct 02 11:40:12 pause-892275 kubelet[3223]: E1002 11:40:12.962691    3223 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.203:8443: connect: connection refused" node="pause-892275"
	Oct 02 11:40:14 pause-892275 kubelet[3223]: I1002 11:40:14.564612    3223 kubelet_node_status.go:70] "Attempting to register node" node="pause-892275"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.417414    3223 apiserver.go:52] "Watching apiserver"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.425764    3223 topology_manager.go:215] "Topology Admit Handler" podUID="82952868-5c0c-4b75-a974-3d22d51657f1" podNamespace="kube-system" podName="kube-proxy-h9rtm"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.426047    3223 topology_manager.go:215] "Topology Admit Handler" podUID="83150d98-2463-4dd5-ab60-18ea97aa0fbf" podNamespace="kube-system" podName="coredns-5dd5756b68-4wp2m"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.427027    3223 kubelet_node_status.go:108] "Node was previously registered" node="pause-892275"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.427170    3223 kubelet_node_status.go:73] "Successfully registered node" node="pause-892275"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.440977    3223 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.445126    3223 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.446630    3223 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.529670    3223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82952868-5c0c-4b75-a974-3d22d51657f1-xtables-lock\") pod \"kube-proxy-h9rtm\" (UID: \"82952868-5c0c-4b75-a974-3d22d51657f1\") " pod="kube-system/kube-proxy-h9rtm"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.530026    3223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82952868-5c0c-4b75-a974-3d22d51657f1-lib-modules\") pod \"kube-proxy-h9rtm\" (UID: \"82952868-5c0c-4b75-a974-3d22d51657f1\") " pod="kube-system/kube-proxy-h9rtm"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.727613    3223 scope.go:117] "RemoveContainer" containerID="0ca4784b618059d8b7840bbd5e90d613ac48d8bfb2d4672a3c8bb38cd3a20a29"
	Oct 02 11:40:17 pause-892275 kubelet[3223]: I1002 11:40:17.728194    3223 scope.go:117] "RemoveContainer" containerID="c5cafeee47dfcba3aeac4aaa3d2f1f8f613c7a2ca593b2d4bc5b4c508c79f52b"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-892275 -n pause-892275
helpers_test.go:261: (dbg) Run:  kubectl --context pause-892275 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (50.70s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-304121 --alsologtostderr -v=3
E1002 11:46:30.122785  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
E1002 11:46:30.128108  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
E1002 11:46:30.138449  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
E1002 11:46:30.158843  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
E1002 11:46:30.199231  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
E1002 11:46:30.279585  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
E1002 11:46:30.440062  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-304121 --alsologtostderr -v=3: exit status 82 (2m0.82230517s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-304121"  ...
	* Stopping node "no-preload-304121"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 11:46:29.698593  383394 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:46:29.698863  383394 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:46:29.698874  383394 out.go:309] Setting ErrFile to fd 2...
	I1002 11:46:29.698882  383394 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:46:29.699177  383394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 11:46:29.699573  383394 out.go:303] Setting JSON to false
	I1002 11:46:29.699751  383394 mustload.go:65] Loading cluster: no-preload-304121
	I1002 11:46:29.700191  383394 config.go:182] Loaded profile config "no-preload-304121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:46:29.700282  383394 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/config.json ...
	I1002 11:46:29.700471  383394 mustload.go:65] Loading cluster: no-preload-304121
	I1002 11:46:29.700628  383394 config.go:182] Loaded profile config "no-preload-304121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:46:29.700665  383394 stop.go:39] StopHost: no-preload-304121
	I1002 11:46:29.701080  383394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:46:29.701147  383394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:46:29.717223  383394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
	I1002 11:46:29.717789  383394 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:46:29.718535  383394 main.go:141] libmachine: Using API Version  1
	I1002 11:46:29.718560  383394 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:46:29.719020  383394 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:46:29.721996  383394 out.go:177] * Stopping node "no-preload-304121"  ...
	I1002 11:46:29.723466  383394 main.go:141] libmachine: Stopping "no-preload-304121"...
	I1002 11:46:29.723491  383394 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 11:46:29.725643  383394 main.go:141] libmachine: (no-preload-304121) Calling .Stop
	I1002 11:46:29.729941  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 0/60
	I1002 11:46:30.732106  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 1/60
	I1002 11:46:31.733478  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 2/60
	I1002 11:46:32.735891  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 3/60
	I1002 11:46:33.737634  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 4/60
	I1002 11:46:34.740005  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 5/60
	I1002 11:46:35.741516  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 6/60
	I1002 11:46:36.742903  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 7/60
	I1002 11:46:37.744818  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 8/60
	I1002 11:46:38.746491  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 9/60
	I1002 11:46:39.748374  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 10/60
	I1002 11:46:40.749776  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 11/60
	I1002 11:46:41.751414  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 12/60
	I1002 11:46:42.752774  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 13/60
	I1002 11:46:43.754210  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 14/60
	I1002 11:46:44.756546  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 15/60
	I1002 11:46:45.757980  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 16/60
	I1002 11:46:46.759372  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 17/60
	I1002 11:46:47.760887  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 18/60
	I1002 11:46:48.762262  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 19/60
	I1002 11:46:49.764422  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 20/60
	I1002 11:46:50.766643  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 21/60
	I1002 11:46:51.768897  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 22/60
	I1002 11:46:52.770175  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 23/60
	I1002 11:46:53.771429  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 24/60
	I1002 11:46:54.772939  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 25/60
	I1002 11:46:55.774186  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 26/60
	I1002 11:46:56.775841  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 27/60
	I1002 11:46:57.778157  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 28/60
	I1002 11:46:58.779675  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 29/60
	I1002 11:46:59.781836  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 30/60
	I1002 11:47:00.783446  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 31/60
	I1002 11:47:01.784808  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 32/60
	I1002 11:47:02.786388  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 33/60
	I1002 11:47:03.787789  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 34/60
	I1002 11:47:04.790098  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 35/60
	I1002 11:47:05.791571  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 36/60
	I1002 11:47:06.793260  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 37/60
	I1002 11:47:07.794564  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 38/60
	I1002 11:47:08.796933  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 39/60
	I1002 11:47:09.799130  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 40/60
	I1002 11:47:10.800451  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 41/60
	I1002 11:47:11.802120  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 42/60
	I1002 11:47:12.803359  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 43/60
	I1002 11:47:13.804956  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 44/60
	I1002 11:47:14.806842  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 45/60
	I1002 11:47:15.808570  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 46/60
	I1002 11:47:16.809987  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 47/60
	I1002 11:47:17.811320  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 48/60
	I1002 11:47:18.812857  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 49/60
	I1002 11:47:19.814960  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 50/60
	I1002 11:47:20.816819  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 51/60
	I1002 11:47:21.818179  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 52/60
	I1002 11:47:22.819492  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 53/60
	I1002 11:47:23.820910  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 54/60
	I1002 11:47:24.823142  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 55/60
	I1002 11:47:25.824351  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 56/60
	I1002 11:47:26.825691  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 57/60
	I1002 11:47:27.826946  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 58/60
	I1002 11:47:28.828314  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 59/60
	I1002 11:47:29.829545  383394 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1002 11:47:29.829611  383394 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1002 11:47:29.829630  383394 retry.go:31] will retry after 509.686722ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1002 11:47:30.340328  383394 stop.go:39] StopHost: no-preload-304121
	I1002 11:47:30.340709  383394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:47:30.340757  383394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:47:30.356913  383394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36589
	I1002 11:47:30.357430  383394 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:47:30.357933  383394 main.go:141] libmachine: Using API Version  1
	I1002 11:47:30.357959  383394 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:47:30.358372  383394 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:47:30.360855  383394 out.go:177] * Stopping node "no-preload-304121"  ...
	I1002 11:47:30.362332  383394 main.go:141] libmachine: Stopping "no-preload-304121"...
	I1002 11:47:30.362385  383394 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 11:47:30.364285  383394 main.go:141] libmachine: (no-preload-304121) Calling .Stop
	I1002 11:47:30.368888  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 0/60
	I1002 11:47:31.371023  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 1/60
	I1002 11:47:32.372586  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 2/60
	I1002 11:47:33.373902  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 3/60
	I1002 11:47:34.375490  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 4/60
	I1002 11:47:35.377788  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 5/60
	I1002 11:47:36.379111  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 6/60
	I1002 11:47:37.380609  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 7/60
	I1002 11:47:38.382186  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 8/60
	I1002 11:47:39.383783  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 9/60
	I1002 11:47:40.386045  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 10/60
	I1002 11:47:41.387651  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 11/60
	I1002 11:47:42.389085  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 12/60
	I1002 11:47:43.390438  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 13/60
	I1002 11:47:44.391839  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 14/60
	I1002 11:47:45.393662  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 15/60
	I1002 11:47:46.395349  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 16/60
	I1002 11:47:47.397026  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 17/60
	I1002 11:47:48.398674  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 18/60
	I1002 11:47:49.400189  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 19/60
	I1002 11:47:50.402012  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 20/60
	I1002 11:47:51.403531  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 21/60
	I1002 11:47:52.405089  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 22/60
	I1002 11:47:53.406635  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 23/60
	I1002 11:47:54.408050  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 24/60
	I1002 11:47:55.409377  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 25/60
	I1002 11:47:56.411132  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 26/60
	I1002 11:47:57.412975  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 27/60
	I1002 11:47:58.414501  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 28/60
	I1002 11:47:59.416035  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 29/60
	I1002 11:48:00.417342  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 30/60
	I1002 11:48:01.418917  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 31/60
	I1002 11:48:02.420185  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 32/60
	I1002 11:48:03.421423  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 33/60
	I1002 11:48:04.422794  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 34/60
	I1002 11:48:05.424092  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 35/60
	I1002 11:48:06.425734  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 36/60
	I1002 11:48:07.427129  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 37/60
	I1002 11:48:08.428558  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 38/60
	I1002 11:48:09.429947  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 39/60
	I1002 11:48:10.431674  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 40/60
	I1002 11:48:11.433153  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 41/60
	I1002 11:48:12.434457  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 42/60
	I1002 11:48:13.436056  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 43/60
	I1002 11:48:14.437468  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 44/60
	I1002 11:48:15.438920  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 45/60
	I1002 11:48:16.440845  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 46/60
	I1002 11:48:17.442182  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 47/60
	I1002 11:48:18.443598  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 48/60
	I1002 11:48:19.445322  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 49/60
	I1002 11:48:20.447213  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 50/60
	I1002 11:48:21.448802  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 51/60
	I1002 11:48:22.450294  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 52/60
	I1002 11:48:23.451795  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 53/60
	I1002 11:48:24.453219  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 54/60
	I1002 11:48:25.454840  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 55/60
	I1002 11:48:26.456279  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 56/60
	I1002 11:48:27.457806  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 57/60
	I1002 11:48:28.459210  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 58/60
	I1002 11:48:29.460758  383394 main.go:141] libmachine: (no-preload-304121) Waiting for machine to stop 59/60
	I1002 11:48:30.461451  383394 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1002 11:48:30.461502  383394 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1002 11:48:30.463355  383394 out.go:177] 
	W1002 11:48:30.464895  383394 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1002 11:48:30.464918  383394 out.go:239] * 
	W1002 11:48:30.467870  383394 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 11:48:30.469437  383394 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-304121 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-304121 -n no-preload-304121
E1002 11:48:30.518514  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 11:48:30.523783  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 11:48:30.534118  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 11:48:30.554454  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 11:48:30.595323  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 11:48:30.675699  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 11:48:30.836151  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 11:48:31.156749  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 11:48:31.797425  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 11:48:33.077955  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 11:48:35.638659  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 11:48:38.684851  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 11:48:40.759814  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-304121 -n no-preload-304121: exit status 3 (18.447882901s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 11:48:48.918779  384126 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.143:22: connect: no route to host
	E1002 11:48:48.918809  384126 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.143:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-304121" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (139.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-749860 --alsologtostderr -v=3
E1002 11:46:50.603324  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
E1002 11:46:55.305817  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-749860 --alsologtostderr -v=3: exit status 82 (2m1.060167892s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-749860"  ...
	* Stopping node "old-k8s-version-749860"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 11:46:42.070867  383507 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:46:42.071125  383507 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:46:42.071135  383507 out.go:309] Setting ErrFile to fd 2...
	I1002 11:46:42.071142  383507 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:46:42.071319  383507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 11:46:42.071581  383507 out.go:303] Setting JSON to false
	I1002 11:46:42.071687  383507 mustload.go:65] Loading cluster: old-k8s-version-749860
	I1002 11:46:42.072007  383507 config.go:182] Loaded profile config "old-k8s-version-749860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1002 11:46:42.072090  383507 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/config.json ...
	I1002 11:46:42.072265  383507 mustload.go:65] Loading cluster: old-k8s-version-749860
	I1002 11:46:42.072394  383507 config.go:182] Loaded profile config "old-k8s-version-749860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1002 11:46:42.072452  383507 stop.go:39] StopHost: old-k8s-version-749860
	I1002 11:46:42.072837  383507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:46:42.072905  383507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:46:42.088460  383507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36401
	I1002 11:46:42.088942  383507 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:46:42.089556  383507 main.go:141] libmachine: Using API Version  1
	I1002 11:46:42.089581  383507 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:46:42.089923  383507 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:46:42.092358  383507 out.go:177] * Stopping node "old-k8s-version-749860"  ...
	I1002 11:46:42.093952  383507 main.go:141] libmachine: Stopping "old-k8s-version-749860"...
	I1002 11:46:42.093975  383507 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:46:42.095879  383507 main.go:141] libmachine: (old-k8s-version-749860) Calling .Stop
	I1002 11:46:42.099728  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 0/60
	I1002 11:46:43.101128  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 1/60
	I1002 11:46:44.102646  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 2/60
	I1002 11:46:45.104944  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 3/60
	I1002 11:46:46.106599  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 4/60
	I1002 11:46:47.108812  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 5/60
	I1002 11:46:48.110039  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 6/60
	I1002 11:46:49.111435  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 7/60
	I1002 11:46:50.112947  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 8/60
	I1002 11:46:51.114695  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 9/60
	I1002 11:46:52.116340  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 10/60
	I1002 11:46:53.118362  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 11/60
	I1002 11:46:54.120232  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 12/60
	I1002 11:46:55.121642  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 13/60
	I1002 11:46:56.123213  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 14/60
	I1002 11:46:57.125431  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 15/60
	I1002 11:46:58.126939  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 16/60
	I1002 11:46:59.128411  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 17/60
	I1002 11:47:00.129877  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 18/60
	I1002 11:47:01.131444  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 19/60
	I1002 11:47:02.133490  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 20/60
	I1002 11:47:03.135111  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 21/60
	I1002 11:47:04.137221  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 22/60
	I1002 11:47:05.138743  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 23/60
	I1002 11:47:06.140166  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 24/60
	I1002 11:47:07.141904  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 25/60
	I1002 11:47:08.143932  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 26/60
	I1002 11:47:09.145143  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 27/60
	I1002 11:47:10.146500  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 28/60
	I1002 11:47:11.147940  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 29/60
	I1002 11:47:12.150318  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 30/60
	I1002 11:47:13.152035  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 31/60
	I1002 11:47:14.153880  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 32/60
	I1002 11:47:15.155540  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 33/60
	I1002 11:47:16.157010  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 34/60
	I1002 11:47:17.158654  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 35/60
	I1002 11:47:18.159925  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 36/60
	I1002 11:47:19.161220  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 37/60
	I1002 11:47:20.162848  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 38/60
	I1002 11:47:21.164975  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 39/60
	I1002 11:47:22.166867  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 40/60
	I1002 11:47:23.168427  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 41/60
	I1002 11:47:24.169837  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 42/60
	I1002 11:47:25.171348  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 43/60
	I1002 11:47:26.172777  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 44/60
	I1002 11:47:27.174759  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 45/60
	I1002 11:47:28.175898  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 46/60
	I1002 11:47:29.177379  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 47/60
	I1002 11:47:30.178847  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 48/60
	I1002 11:47:31.180342  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 49/60
	I1002 11:47:32.182516  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 50/60
	I1002 11:47:33.184153  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 51/60
	I1002 11:47:34.185927  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 52/60
	I1002 11:47:35.187754  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 53/60
	I1002 11:47:36.189183  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 54/60
	I1002 11:47:37.191596  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 55/60
	I1002 11:47:38.192888  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 56/60
	I1002 11:47:39.194310  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 57/60
	I1002 11:47:40.196285  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 58/60
	I1002 11:47:41.197632  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 59/60
	I1002 11:47:42.199145  383507 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1002 11:47:42.199208  383507 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1002 11:47:42.199234  383507 retry.go:31] will retry after 762.3032ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1002 11:47:42.962127  383507 stop.go:39] StopHost: old-k8s-version-749860
	I1002 11:47:42.962540  383507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:47:42.962591  383507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:47:42.977750  383507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I1002 11:47:42.978234  383507 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:47:42.978723  383507 main.go:141] libmachine: Using API Version  1
	I1002 11:47:42.978743  383507 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:47:42.979155  383507 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:47:42.981319  383507 out.go:177] * Stopping node "old-k8s-version-749860"  ...
	I1002 11:47:42.982984  383507 main.go:141] libmachine: Stopping "old-k8s-version-749860"...
	I1002 11:47:42.983001  383507 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:47:42.984883  383507 main.go:141] libmachine: (old-k8s-version-749860) Calling .Stop
	I1002 11:47:42.988465  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 0/60
	I1002 11:47:43.990166  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 1/60
	I1002 11:47:44.991991  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 2/60
	I1002 11:47:45.993478  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 3/60
	I1002 11:47:46.995117  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 4/60
	I1002 11:47:47.997193  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 5/60
	I1002 11:47:48.998726  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 6/60
	I1002 11:47:50.000429  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 7/60
	I1002 11:47:51.001966  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 8/60
	I1002 11:47:52.003572  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 9/60
	I1002 11:47:53.005493  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 10/60
	I1002 11:47:54.006968  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 11/60
	I1002 11:47:55.008535  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 12/60
	I1002 11:47:56.010003  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 13/60
	I1002 11:47:57.011423  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 14/60
	I1002 11:47:58.013221  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 15/60
	I1002 11:47:59.014656  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 16/60
	I1002 11:48:00.016267  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 17/60
	I1002 11:48:01.018536  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 18/60
	I1002 11:48:02.020903  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 19/60
	I1002 11:48:03.022902  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 20/60
	I1002 11:48:04.024435  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 21/60
	I1002 11:48:05.025876  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 22/60
	I1002 11:48:06.027355  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 23/60
	I1002 11:48:07.028970  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 24/60
	I1002 11:48:08.030677  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 25/60
	I1002 11:48:09.032503  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 26/60
	I1002 11:48:10.034048  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 27/60
	I1002 11:48:11.035630  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 28/60
	I1002 11:48:12.037156  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 29/60
	I1002 11:48:13.039172  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 30/60
	I1002 11:48:14.040557  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 31/60
	I1002 11:48:15.042037  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 32/60
	I1002 11:48:16.043604  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 33/60
	I1002 11:48:17.044817  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 34/60
	I1002 11:48:18.046034  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 35/60
	I1002 11:48:19.047366  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 36/60
	I1002 11:48:20.048840  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 37/60
	I1002 11:48:21.050193  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 38/60
	I1002 11:48:22.051580  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 39/60
	I1002 11:48:23.053444  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 40/60
	I1002 11:48:24.054929  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 41/60
	I1002 11:48:25.056302  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 42/60
	I1002 11:48:26.058079  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 43/60
	I1002 11:48:27.059663  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 44/60
	I1002 11:48:28.061037  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 45/60
	I1002 11:48:29.062512  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 46/60
	I1002 11:48:30.063908  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 47/60
	I1002 11:48:31.065395  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 48/60
	I1002 11:48:32.066895  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 49/60
	I1002 11:48:33.068579  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 50/60
	I1002 11:48:34.069885  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 51/60
	I1002 11:48:35.071275  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 52/60
	I1002 11:48:36.072573  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 53/60
	I1002 11:48:37.074069  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 54/60
	I1002 11:48:38.076605  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 55/60
	I1002 11:48:39.077855  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 56/60
	I1002 11:48:40.079332  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 57/60
	I1002 11:48:41.080670  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 58/60
	I1002 11:48:42.081970  383507 main.go:141] libmachine: (old-k8s-version-749860) Waiting for machine to stop 59/60
	I1002 11:48:43.082445  383507 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1002 11:48:43.082513  383507 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1002 11:48:43.084627  383507 out.go:177] 
	W1002 11:48:43.086129  383507 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1002 11:48:43.086152  383507 out.go:239] * 
	W1002 11:48:43.089113  383507 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 11:48:43.090545  383507 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-749860 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-749860 -n old-k8s-version-749860
E1002 11:48:43.925271  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-749860 -n old-k8s-version-749860: exit status 3 (18.626425724s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1002 11:49:01.718667  384195 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.82:22: connect: no route to host
	E1002 11:49:01.718690  384195 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.82:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-749860" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.69s)

TestStartStop/group/embed-certs/serial/Stop (140.24s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-487027 --alsologtostderr -v=3
E1002 11:47:16.764067  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 11:47:22.001612  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 11:47:22.006917  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 11:47:22.017234  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 11:47:22.037613  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 11:47:22.077980  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 11:47:22.158740  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 11:47:22.319281  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 11:47:22.639861  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 11:47:23.280861  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 11:47:24.561641  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 11:47:27.121815  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-487027 --alsologtostderr -v=3: exit status 82 (2m1.663712376s)

-- stdout --
	* Stopping node "embed-certs-487027"  ...
	* Stopping node "embed-certs-487027"  ...

-- /stdout --
** stderr ** 
	I1002 11:47:15.048982  383766 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:47:15.049331  383766 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:47:15.049345  383766 out.go:309] Setting ErrFile to fd 2...
	I1002 11:47:15.049351  383766 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:47:15.049559  383766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 11:47:15.049798  383766 out.go:303] Setting JSON to false
	I1002 11:47:15.049880  383766 mustload.go:65] Loading cluster: embed-certs-487027
	I1002 11:47:15.050216  383766 config.go:182] Loaded profile config "embed-certs-487027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:47:15.050279  383766 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/config.json ...
	I1002 11:47:15.050473  383766 mustload.go:65] Loading cluster: embed-certs-487027
	I1002 11:47:15.050588  383766 config.go:182] Loaded profile config "embed-certs-487027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:47:15.050617  383766 stop.go:39] StopHost: embed-certs-487027
	I1002 11:47:15.050984  383766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:47:15.051040  383766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:47:15.065857  383766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I1002 11:47:15.066385  383766 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:47:15.067052  383766 main.go:141] libmachine: Using API Version  1
	I1002 11:47:15.067093  383766 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:47:15.067523  383766 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:47:15.070225  383766 out.go:177] * Stopping node "embed-certs-487027"  ...
	I1002 11:47:15.071829  383766 main.go:141] libmachine: Stopping "embed-certs-487027"...
	I1002 11:47:15.071853  383766 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:47:15.073411  383766 main.go:141] libmachine: (embed-certs-487027) Calling .Stop
	I1002 11:47:15.076958  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 0/60
	I1002 11:47:16.078505  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 1/60
	I1002 11:47:17.080084  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 2/60
	I1002 11:47:18.082454  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 3/60
	I1002 11:47:19.084075  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 4/60
	I1002 11:47:20.085928  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 5/60
	I1002 11:47:21.087405  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 6/60
	I1002 11:47:22.089382  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 7/60
	I1002 11:47:23.090879  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 8/60
	I1002 11:47:24.092984  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 9/60
	I1002 11:47:25.095157  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 10/60
	I1002 11:47:26.097035  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 11/60
	I1002 11:47:27.098526  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 12/60
	I1002 11:47:28.099810  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 13/60
	I1002 11:47:29.101205  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 14/60
	I1002 11:47:30.103138  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 15/60
	I1002 11:47:31.104913  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 16/60
	I1002 11:47:32.106136  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 17/60
	I1002 11:47:33.107549  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 18/60
	I1002 11:47:34.109818  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 19/60
	I1002 11:47:35.112265  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 20/60
	I1002 11:47:36.113643  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 21/60
	I1002 11:47:37.115428  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 22/60
	I1002 11:47:38.117019  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 23/60
	I1002 11:47:39.118653  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 24/60
	I1002 11:47:40.120540  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 25/60
	I1002 11:47:41.122161  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 26/60
	I1002 11:47:42.124135  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 27/60
	I1002 11:47:43.125791  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 28/60
	I1002 11:47:44.128081  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 29/60
	I1002 11:47:45.130236  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 30/60
	I1002 11:47:46.131718  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 31/60
	I1002 11:47:47.133244  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 32/60
	I1002 11:47:48.134670  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 33/60
	I1002 11:47:49.136097  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 34/60
	I1002 11:47:50.138102  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 35/60
	I1002 11:47:51.139551  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 36/60
	I1002 11:47:52.140888  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 37/60
	I1002 11:47:53.142276  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 38/60
	I1002 11:47:54.143702  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 39/60
	I1002 11:47:55.145972  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 40/60
	I1002 11:47:56.147316  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 41/60
	I1002 11:47:57.148784  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 42/60
	I1002 11:47:58.150304  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 43/60
	I1002 11:47:59.151849  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 44/60
	I1002 11:48:00.154022  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 45/60
	I1002 11:48:01.155428  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 46/60
	I1002 11:48:02.156921  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 47/60
	I1002 11:48:03.158263  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 48/60
	I1002 11:48:04.159453  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 49/60
	I1002 11:48:05.161571  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 50/60
	I1002 11:48:06.163197  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 51/60
	I1002 11:48:07.165504  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 52/60
	I1002 11:48:08.167130  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 53/60
	I1002 11:48:09.168888  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 54/60
	I1002 11:48:10.171136  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 55/60
	I1002 11:48:11.172391  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 56/60
	I1002 11:48:12.173732  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 57/60
	I1002 11:48:13.175265  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 58/60
	I1002 11:48:14.176832  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 59/60
	I1002 11:48:15.178155  383766 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1002 11:48:15.178223  383766 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1002 11:48:15.178249  383766 retry.go:31] will retry after 1.367710139s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1002 11:48:16.546776  383766 stop.go:39] StopHost: embed-certs-487027
	I1002 11:48:16.547230  383766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:48:16.547289  383766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:48:16.562287  383766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40185
	I1002 11:48:16.562840  383766 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:48:16.563362  383766 main.go:141] libmachine: Using API Version  1
	I1002 11:48:16.563389  383766 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:48:16.563730  383766 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:48:16.565919  383766 out.go:177] * Stopping node "embed-certs-487027"  ...
	I1002 11:48:16.567455  383766 main.go:141] libmachine: Stopping "embed-certs-487027"...
	I1002 11:48:16.567469  383766 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:48:16.569098  383766 main.go:141] libmachine: (embed-certs-487027) Calling .Stop
	I1002 11:48:16.572213  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 0/60
	I1002 11:48:17.573872  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 1/60
	I1002 11:48:18.575071  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 2/60
	I1002 11:48:19.576553  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 3/60
	I1002 11:48:20.577815  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 4/60
	I1002 11:48:21.579726  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 5/60
	I1002 11:48:22.581311  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 6/60
	I1002 11:48:23.582711  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 7/60
	I1002 11:48:24.584232  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 8/60
	I1002 11:48:25.585685  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 9/60
	I1002 11:48:26.587720  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 10/60
	I1002 11:48:27.589144  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 11/60
	I1002 11:48:28.590653  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 12/60
	I1002 11:48:29.592095  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 13/60
	I1002 11:48:30.593046  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 14/60
	I1002 11:48:31.594977  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 15/60
	I1002 11:48:32.596947  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 16/60
	I1002 11:48:33.598315  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 17/60
	I1002 11:48:34.599893  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 18/60
	I1002 11:48:35.601287  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 19/60
	I1002 11:48:36.603543  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 20/60
	I1002 11:48:37.604965  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 21/60
	I1002 11:48:38.606501  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 22/60
	I1002 11:48:39.607999  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 23/60
	I1002 11:48:40.609400  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 24/60
	I1002 11:48:41.611427  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 25/60
	I1002 11:48:42.613083  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 26/60
	I1002 11:48:43.614539  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 27/60
	I1002 11:48:44.615896  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 28/60
	I1002 11:48:45.617389  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 29/60
	I1002 11:48:46.619099  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 30/60
	I1002 11:48:47.620819  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 31/60
	I1002 11:48:48.622240  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 32/60
	I1002 11:48:49.623649  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 33/60
	I1002 11:48:50.625321  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 34/60
	I1002 11:48:51.627178  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 35/60
	I1002 11:48:52.628698  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 36/60
	I1002 11:48:53.630173  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 37/60
	I1002 11:48:54.631827  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 38/60
	I1002 11:48:55.633190  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 39/60
	I1002 11:48:56.635195  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 40/60
	I1002 11:48:57.636747  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 41/60
	I1002 11:48:58.638189  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 42/60
	I1002 11:48:59.639964  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 43/60
	I1002 11:49:00.641431  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 44/60
	I1002 11:49:01.643236  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 45/60
	I1002 11:49:02.644709  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 46/60
	I1002 11:49:03.646218  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 47/60
	I1002 11:49:04.647565  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 48/60
	I1002 11:49:05.648877  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 49/60
	I1002 11:49:06.650772  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 50/60
	I1002 11:49:07.652159  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 51/60
	I1002 11:49:08.653680  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 52/60
	I1002 11:49:09.655057  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 53/60
	I1002 11:49:10.656802  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 54/60
	I1002 11:49:11.658917  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 55/60
	I1002 11:49:12.660615  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 56/60
	I1002 11:49:13.662102  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 57/60
	I1002 11:49:14.663568  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 58/60
	I1002 11:49:15.664999  383766 main.go:141] libmachine: (embed-certs-487027) Waiting for machine to stop 59/60
	I1002 11:49:16.666036  383766 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1002 11:49:16.666092  383766 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1002 11:49:16.668291  383766 out.go:177] 
	W1002 11:49:16.669792  383766 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1002 11:49:16.669809  383766 out.go:239] * 
	* 
	W1002 11:49:16.672683  383766 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 11:49:16.674292  383766 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-487027 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-487027 -n embed-certs-487027
E1002 11:49:17.877174  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 11:49:20.438222  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 11:49:25.559124  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 11:49:26.888405  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 11:49:26.893699  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 11:49:26.903947  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 11:49:26.924249  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 11:49:26.964596  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 11:49:27.044827  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 11:49:27.205278  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 11:49:27.525939  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 11:49:28.166841  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 11:49:29.447402  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 11:49:32.008589  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 11:49:34.099531  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
E1002 11:49:34.104780  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
E1002 11:49:34.115023  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
E1002 11:49:34.135296  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
E1002 11:49:34.175572  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
E1002 11:49:34.255930  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
E1002 11:49:34.416386  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
E1002 11:49:34.737306  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-487027 -n embed-certs-487027: exit status 3 (18.575503856s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1002 11:49:35.254736  384539 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.147:22: connect: no route to host
	E1002 11:49:35.254756  384539 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.147:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-487027" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (140.24s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (139.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-777999 --alsologtostderr -v=3
E1002 11:47:42.483445  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 11:47:52.044929  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
E1002 11:48:02.964194  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-777999 --alsologtostderr -v=3: exit status 82 (2m1.299789909s)

-- stdout --
	* Stopping node "default-k8s-diff-port-777999"  ...
	* Stopping node "default-k8s-diff-port-777999"  ...
	
	

-- /stdout --
** stderr ** 
	I1002 11:47:41.759736  383921 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:47:41.760018  383921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:47:41.760030  383921 out.go:309] Setting ErrFile to fd 2...
	I1002 11:47:41.760034  383921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:47:41.760273  383921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 11:47:41.760563  383921 out.go:303] Setting JSON to false
	I1002 11:47:41.760664  383921 mustload.go:65] Loading cluster: default-k8s-diff-port-777999
	I1002 11:47:41.761023  383921 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:47:41.761113  383921 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/config.json ...
	I1002 11:47:41.761300  383921 mustload.go:65] Loading cluster: default-k8s-diff-port-777999
	I1002 11:47:41.761448  383921 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:47:41.761494  383921 stop.go:39] StopHost: default-k8s-diff-port-777999
	I1002 11:47:41.761892  383921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:47:41.761956  383921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:47:41.776845  383921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38773
	I1002 11:47:41.777297  383921 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:47:41.777862  383921 main.go:141] libmachine: Using API Version  1
	I1002 11:47:41.777896  383921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:47:41.778305  383921 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:47:41.781138  383921 out.go:177] * Stopping node "default-k8s-diff-port-777999"  ...
	I1002 11:47:41.782687  383921 main.go:141] libmachine: Stopping "default-k8s-diff-port-777999"...
	I1002 11:47:41.782716  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:47:41.784438  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Stop
	I1002 11:47:41.788355  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 0/60
	I1002 11:47:42.789726  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 1/60
	I1002 11:47:43.791297  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 2/60
	I1002 11:47:44.793043  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 3/60
	I1002 11:47:45.794469  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 4/60
	I1002 11:47:46.797046  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 5/60
	I1002 11:47:47.798556  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 6/60
	I1002 11:47:48.800046  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 7/60
	I1002 11:47:49.801435  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 8/60
	I1002 11:47:50.803056  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 9/60
	I1002 11:47:51.804361  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 10/60
	I1002 11:47:52.805966  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 11/60
	I1002 11:47:53.807392  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 12/60
	I1002 11:47:54.808928  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 13/60
	I1002 11:47:55.810479  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 14/60
	I1002 11:47:56.812703  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 15/60
	I1002 11:47:57.814168  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 16/60
	I1002 11:47:58.815923  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 17/60
	I1002 11:47:59.817455  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 18/60
	I1002 11:48:00.819006  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 19/60
	I1002 11:48:01.821302  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 20/60
	I1002 11:48:02.822684  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 21/60
	I1002 11:48:03.824434  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 22/60
	I1002 11:48:04.825983  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 23/60
	I1002 11:48:05.827651  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 24/60
	I1002 11:48:06.829693  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 25/60
	I1002 11:48:07.831201  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 26/60
	I1002 11:48:08.832856  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 27/60
	I1002 11:48:09.834267  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 28/60
	I1002 11:48:10.835781  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 29/60
	I1002 11:48:11.837622  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 30/60
	I1002 11:48:12.839321  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 31/60
	I1002 11:48:13.840913  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 32/60
	I1002 11:48:14.842420  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 33/60
	I1002 11:48:15.843902  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 34/60
	I1002 11:48:16.845922  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 35/60
	I1002 11:48:17.847449  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 36/60
	I1002 11:48:18.848799  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 37/60
	I1002 11:48:19.850528  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 38/60
	I1002 11:48:20.851949  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 39/60
	I1002 11:48:21.854420  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 40/60
	I1002 11:48:22.855836  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 41/60
	I1002 11:48:23.857314  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 42/60
	I1002 11:48:24.858842  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 43/60
	I1002 11:48:25.860461  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 44/60
	I1002 11:48:26.862794  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 45/60
	I1002 11:48:27.864198  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 46/60
	I1002 11:48:28.865714  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 47/60
	I1002 11:48:29.867217  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 48/60
	I1002 11:48:30.868745  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 49/60
	I1002 11:48:31.871197  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 50/60
	I1002 11:48:32.872887  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 51/60
	I1002 11:48:33.874796  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 52/60
	I1002 11:48:34.876097  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 53/60
	I1002 11:48:35.877631  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 54/60
	I1002 11:48:36.879856  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 55/60
	I1002 11:48:37.881564  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 56/60
	I1002 11:48:38.883105  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 57/60
	I1002 11:48:39.884702  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 58/60
	I1002 11:48:40.886081  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 59/60
	I1002 11:48:41.887452  383921 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1002 11:48:41.887530  383921 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1002 11:48:41.887552  383921 retry.go:31] will retry after 1.003145542s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1002 11:48:42.891716  383921 stop.go:39] StopHost: default-k8s-diff-port-777999
	I1002 11:48:42.892153  383921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:48:42.892206  383921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:48:42.907408  383921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36621
	I1002 11:48:42.907917  383921 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:48:42.908380  383921 main.go:141] libmachine: Using API Version  1
	I1002 11:48:42.908409  383921 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:48:42.908751  383921 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:48:42.910964  383921 out.go:177] * Stopping node "default-k8s-diff-port-777999"  ...
	I1002 11:48:42.912260  383921 main.go:141] libmachine: Stopping "default-k8s-diff-port-777999"...
	I1002 11:48:42.912275  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:48:42.913856  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Stop
	I1002 11:48:42.917527  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 0/60
	I1002 11:48:43.918989  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 1/60
	I1002 11:48:44.921024  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 2/60
	I1002 11:48:45.922230  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 3/60
	I1002 11:48:46.923593  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 4/60
	I1002 11:48:47.925268  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 5/60
	I1002 11:48:48.926972  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 6/60
	I1002 11:48:49.928270  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 7/60
	I1002 11:48:50.929725  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 8/60
	I1002 11:48:51.931284  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 9/60
	I1002 11:48:52.933190  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 10/60
	I1002 11:48:53.934669  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 11/60
	I1002 11:48:54.936138  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 12/60
	I1002 11:48:55.937506  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 13/60
	I1002 11:48:56.938963  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 14/60
	I1002 11:48:57.940590  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 15/60
	I1002 11:48:58.942153  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 16/60
	I1002 11:48:59.943684  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 17/60
	I1002 11:49:00.945143  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 18/60
	I1002 11:49:01.946462  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 19/60
	I1002 11:49:02.948223  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 20/60
	I1002 11:49:03.949602  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 21/60
	I1002 11:49:04.950597  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 22/60
	I1002 11:49:05.952108  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 23/60
	I1002 11:49:06.953761  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 24/60
	I1002 11:49:07.955367  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 25/60
	I1002 11:49:08.956956  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 26/60
	I1002 11:49:09.958508  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 27/60
	I1002 11:49:10.960171  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 28/60
	I1002 11:49:11.961515  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 29/60
	I1002 11:49:12.963332  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 30/60
	I1002 11:49:13.964717  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 31/60
	I1002 11:49:14.966215  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 32/60
	I1002 11:49:15.967490  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 33/60
	I1002 11:49:16.969034  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 34/60
	I1002 11:49:17.971095  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 35/60
	I1002 11:49:18.972651  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 36/60
	I1002 11:49:19.974145  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 37/60
	I1002 11:49:20.975436  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 38/60
	I1002 11:49:21.977099  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 39/60
	I1002 11:49:22.978905  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 40/60
	I1002 11:49:23.980248  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 41/60
	I1002 11:49:24.981851  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 42/60
	I1002 11:49:25.983197  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 43/60
	I1002 11:49:26.984540  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 44/60
	I1002 11:49:27.986327  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 45/60
	I1002 11:49:28.987676  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 46/60
	I1002 11:49:29.988976  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 47/60
	I1002 11:49:30.990672  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 48/60
	I1002 11:49:31.992036  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 49/60
	I1002 11:49:32.994532  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 50/60
	I1002 11:49:33.996092  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 51/60
	I1002 11:49:34.997681  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 52/60
	I1002 11:49:35.999375  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 53/60
	I1002 11:49:37.000749  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 54/60
	I1002 11:49:38.002455  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 55/60
	I1002 11:49:39.003963  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 56/60
	I1002 11:49:40.005416  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 57/60
	I1002 11:49:41.006719  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 58/60
	I1002 11:49:42.008161  383921 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for machine to stop 59/60
	I1002 11:49:43.009040  383921 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1002 11:49:43.009108  383921 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1002 11:49:43.011282  383921 out.go:177] 
	W1002 11:49:43.012760  383921 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1002 11:49:43.012778  383921 out.go:239] * 
	* 
	W1002 11:49:43.015727  383921 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 11:49:43.017142  383921 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-777999 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-777999 -n default-k8s-diff-port-777999
E1002 11:49:44.339065  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-777999 -n default-k8s-diff-port-777999: exit status 3 (18.603603181s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1002 11:50:01.622705  384704 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.251:22: connect: no route to host
	E1002 11:50:01.622729  384704 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.251:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-777999" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.90s)
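The stderr above shows the shape of the stop path in this failure: poll the machine state once per second for 60 attempts, retry the entire stop once after ~1s, then exit 82 with GUEST_STOP_TIMEOUT. Two full 60-second passes plus overhead account for the ~139.9s test duration. A minimal sketch of that control flow, assuming a `stopVM` stand-in that always reports the VM as still running (as happens in this run):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// stopVM stands in for the libmachine Stop/GetState round-trip; here it
// always reports the machine as still "Running", mirroring this run.
func stopVM() error {
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stopWithRetry mirrors the pattern in the log: poll up to maxPolls
// times, retry the whole sequence, then give up with a
// GUEST_STOP_TIMEOUT-style error.
func stopWithRetry(maxPolls, retries int, delay time.Duration) error {
	for attempt := 0; attempt <= retries; attempt++ {
		var err error
		for i := 0; i < maxPolls; i++ {
			if err = stopVM(); err == nil {
				return nil // machine stopped
			}
			time.Sleep(delay)
		}
		if attempt < retries {
			fmt.Printf("will retry after %v: %v\n", delay, err)
		}
	}
	return fmt.Errorf(`GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"`)
}

func main() {
	// The real run uses 60 polls at 1s with one retry; shortened here.
	fmt.Println(stopWithRetry(3, 1, time.Millisecond))
}
```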

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-304121 -n no-preload-304121
E1002 11:48:51.000363  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-304121 -n no-preload-304121: exit status 3 (3.167900895s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1002 11:48:52.086729  384244 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.143:22: connect: no route to host
	E1002 11:48:52.086753  384244 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.143:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-304121 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-304121 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153324761s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.143:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-304121 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-304121 -n no-preload-304121
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-304121 -n no-preload-304121: exit status 3 (3.062223731s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1002 11:49:01.302793  384314 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.143:22: connect: no route to host
	E1002 11:49:01.302812  384314 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.143:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-304121" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
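Three distinct exit codes recur across these failures: 82 for the stop timeout (GUEST_STOP_TIMEOUT), 11 for enabling an addon against an unreachable host (MK_ADDON_ENABLE_PAUSED), and 3 for the failed status probe. A hypothetical mapping, inferred only from the codes observed in this log — the STATUS_ERROR label is a placeholder, not a minikube reason code:

```go
package main

import "fmt"

// exitCodeFor maps the failure modes seen in this report to the exit
// codes the log shows for them; inferred from this run, not from
// minikube's actual reason-code tables.
func exitCodeFor(reason string) int {
	switch reason {
	case "GUEST_STOP_TIMEOUT": // stop never completed within both passes
		return 82
	case "MK_ADDON_ENABLE_PAUSED": // crictl check failed: host unreachable
		return 11
	case "STATUS_ERROR": // status probe could not SSH to the guest
		return 3
	default:
		return 1
	}
}

func main() {
	for _, r := range []string{"GUEST_STOP_TIMEOUT", "MK_ADDON_ENABLE_PAUSED", "STATUS_ERROR"} {
		fmt.Printf("%s -> exit status %d\n", r, exitCodeFor(r))
	}
}
```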

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-749860 -n old-k8s-version-749860
E1002 11:49:04.535176  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-749860 -n old-k8s-version-749860: exit status 3 (3.168006363s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1002 11:49:04.886751  384385 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.82:22: connect: no route to host
	E1002 11:49:04.886776  384385 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.82:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-749860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-749860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153238898s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.83.82:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-749860 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-749860 -n old-k8s-version-749860
E1002 11:49:11.480507  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 11:49:13.965907  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-749860 -n old-k8s-version-749860: exit status 3 (3.062131948s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1002 11:49:14.102812  384458 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.82:22: connect: no route to host
	E1002 11:49:14.102838  384458 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.82:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-749860" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-487027 -n embed-certs-487027
E1002 11:49:35.377920  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
E1002 11:49:35.799388  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 11:49:36.658296  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
E1002 11:49:37.129233  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-487027 -n embed-certs-487027: exit status 3 (3.167785364s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1002 11:49:38.422735  384634 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.147:22: connect: no route to host
	E1002 11:49:38.422765  384634 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.147:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-487027 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1002 11:49:39.218458  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-487027 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153586176s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.147:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-487027 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-487027 -n embed-certs-487027
E1002 11:49:47.370417  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-487027 -n embed-certs-487027: exit status 3 (3.06184495s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1002 11:49:47.638786  384757 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.147:22: connect: no route to host
	E1002 11:49:47.638805  384757 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.147:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-487027" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-777999 -n default-k8s-diff-port-777999
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-777999 -n default-k8s-diff-port-777999: exit status 3 (3.167926086s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1002 11:50:04.790715  384854 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.251:22: connect: no route to host
	E1002 11:50:04.790734  384854 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.251:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-777999 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1002 11:50:05.846516  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 11:50:07.850820  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-777999 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153912999s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.251:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-777999 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-777999 -n default-k8s-diff-port-777999
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-777999 -n default-k8s-diff-port-777999: exit status 3 (3.062327792s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1002 11:50:14.006840  384923 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.251:22: connect: no route to host
	E1002 11:50:14.006860  384923 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.251:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-777999" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-777999 -n default-k8s-diff-port-777999
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-10-02 12:08:34.63456106 +0000 UTC m=+5565.206247114
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-777999 -n default-k8s-diff-port-777999
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-777999 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-777999 logs -n 25: (1.744873723s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-124285 sudo cat                              | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo find                             | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo crio                             | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-124285                                       | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-448198 | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | disable-driver-mounts-448198                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:47 UTC |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-304121             | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC | 02 Oct 23 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-749860        | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC | 02 Oct 23 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-487027            | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC | 02 Oct 23 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-487027                                  | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-777999  | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC | 02 Oct 23 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC |                     |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-304121                  | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 12:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-749860             | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-487027                 | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-487027                                  | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-777999       | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:59 UTC |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 11:50:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 11:50:14.045882  384965 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:50:14.045995  384965 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:50:14.046005  384965 out.go:309] Setting ErrFile to fd 2...
	I1002 11:50:14.046009  384965 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:50:14.046207  384965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 11:50:14.046807  384965 out.go:303] Setting JSON to false
	I1002 11:50:14.047867  384965 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9160,"bootTime":1696238254,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 11:50:14.047937  384965 start.go:138] virtualization: kvm guest
	I1002 11:50:14.050148  384965 out.go:177] * [default-k8s-diff-port-777999] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 11:50:14.051736  384965 notify.go:220] Checking for updates...
	I1002 11:50:14.051738  384965 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 11:50:14.053419  384965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:50:14.055001  384965 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:50:14.056531  384965 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:50:14.057828  384965 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 11:50:14.059154  384965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 11:50:14.060884  384965 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:50:14.061318  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:50:14.061365  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:50:14.077285  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36537
	I1002 11:50:14.077670  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:50:14.078164  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:50:14.078184  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:50:14.078590  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:50:14.078766  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:50:14.079011  384965 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 11:50:14.079285  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:50:14.079321  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:50:14.093519  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38549
	I1002 11:50:14.093897  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:50:14.094331  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:50:14.094375  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:50:14.094689  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:50:14.094875  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:50:14.127852  384965 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 11:50:14.129579  384965 start.go:298] selected driver: kvm2
	I1002 11:50:14.129589  384965 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-777999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:50:14.129734  384965 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 11:50:14.130441  384965 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:50:14.130517  384965 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 11:50:14.145313  384965 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 11:50:14.145678  384965 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 11:50:14.145737  384965 cni.go:84] Creating CNI manager for ""
	I1002 11:50:14.145747  384965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:50:14.145754  384965 start_flags.go:321] config:
	{Name:default-k8s-diff-port-777999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:50:14.145885  384965 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:50:14.147697  384965 out.go:177] * Starting control plane node default-k8s-diff-port-777999 in cluster default-k8s-diff-port-777999
	I1002 11:50:14.518571  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:14.149188  384965 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:50:14.149229  384965 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1002 11:50:14.149243  384965 cache.go:57] Caching tarball of preloaded images
	I1002 11:50:14.149342  384965 preload.go:174] Found /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 11:50:14.149355  384965 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 11:50:14.149469  384965 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/config.json ...
	I1002 11:50:14.149690  384965 start.go:365] acquiring machines lock for default-k8s-diff-port-777999: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:50:17.590603  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:23.670608  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:26.742637  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:32.822640  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:35.894704  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:41.974682  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:45.046703  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:51.126633  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:54.198624  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:00.278622  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:03.350650  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:09.430627  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:12.502639  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:18.582668  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:21.654622  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:27.734588  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:30.806674  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:36.886711  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:39.958677  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:46.038638  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:49.110583  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:55.190669  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:58.262632  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:04.342658  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:07.414733  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:13.494648  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:16.566610  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:22.646664  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:25.718682  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:31.798673  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:34.870620  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:40.950664  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:44.022695  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:50.102629  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:53.174698  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:59.254603  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:02.326684  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:08.406661  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:11.478769  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:17.558670  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:20.630696  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:26.710600  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:29.782676  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:35.862655  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:38.867149  384505 start.go:369] acquired machines lock for "old-k8s-version-749860" in 4m24.621828644s
	I1002 11:53:38.867251  384505 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:53:38.867260  384505 fix.go:54] fixHost starting: 
	I1002 11:53:38.867725  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:53:38.867761  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:53:38.882900  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45593
	I1002 11:53:38.883484  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:53:38.883950  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:53:38.883974  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:53:38.884318  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:53:38.884530  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:38.884688  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:53:38.886067  384505 fix.go:102] recreateIfNeeded on old-k8s-version-749860: state=Stopped err=<nil>
	I1002 11:53:38.886102  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	W1002 11:53:38.886288  384505 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:53:38.888401  384505 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-749860" ...
	I1002 11:53:38.889752  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Start
	I1002 11:53:38.889924  384505 main.go:141] libmachine: (old-k8s-version-749860) Ensuring networks are active...
	I1002 11:53:38.890638  384505 main.go:141] libmachine: (old-k8s-version-749860) Ensuring network default is active
	I1002 11:53:38.890980  384505 main.go:141] libmachine: (old-k8s-version-749860) Ensuring network mk-old-k8s-version-749860 is active
	I1002 11:53:38.891314  384505 main.go:141] libmachine: (old-k8s-version-749860) Getting domain xml...
	I1002 11:53:38.892257  384505 main.go:141] libmachine: (old-k8s-version-749860) Creating domain...
	I1002 11:53:38.864675  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:53:38.864716  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:53:38.866979  384344 machine.go:91] provisioned docker machine in 4m37.398507067s
	I1002 11:53:38.867033  384344 fix.go:56] fixHost completed within 4m37.419547722s
	I1002 11:53:38.867039  384344 start.go:83] releasing machines lock for "no-preload-304121", held for 4m37.419568347s
	W1002 11:53:38.867080  384344 start.go:688] error starting host: provision: host is not running
	W1002 11:53:38.867230  384344 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1002 11:53:38.867240  384344 start.go:703] Will try again in 5 seconds ...
	I1002 11:53:40.120018  384505 main.go:141] libmachine: (old-k8s-version-749860) Waiting to get IP...
	I1002 11:53:40.120927  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:40.121258  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:40.121366  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:40.121241  385500 retry.go:31] will retry after 204.223254ms: waiting for machine to come up
	I1002 11:53:40.326895  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:40.327332  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:40.327351  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:40.327293  385500 retry.go:31] will retry after 300.58131ms: waiting for machine to come up
	I1002 11:53:40.629931  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:40.630293  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:40.630324  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:40.630247  385500 retry.go:31] will retry after 460.804681ms: waiting for machine to come up
	I1002 11:53:41.092440  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:41.092887  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:41.092914  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:41.092838  385500 retry.go:31] will retry after 573.592817ms: waiting for machine to come up
	I1002 11:53:41.668507  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:41.668916  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:41.668955  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:41.668879  385500 retry.go:31] will retry after 647.261387ms: waiting for machine to come up
	I1002 11:53:42.317738  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:42.318193  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:42.318228  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:42.318135  385500 retry.go:31] will retry after 643.115699ms: waiting for machine to come up
	I1002 11:53:42.963169  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:42.963572  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:42.963595  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:42.963517  385500 retry.go:31] will retry after 1.059074571s: waiting for machine to come up
	I1002 11:53:44.024372  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:44.024750  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:44.024785  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:44.024703  385500 retry.go:31] will retry after 1.142402067s: waiting for machine to come up
	I1002 11:53:43.868857  384344 start.go:365] acquiring machines lock for no-preload-304121: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:53:45.169146  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:45.169470  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:45.169509  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:45.169430  385500 retry.go:31] will retry after 1.244757741s: waiting for machine to come up
	I1002 11:53:46.415640  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:46.416049  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:46.416078  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:46.416030  385500 retry.go:31] will retry after 2.066150597s: waiting for machine to come up
	I1002 11:53:48.483477  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:48.483998  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:48.484023  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:48.483921  385500 retry.go:31] will retry after 2.521584671s: waiting for machine to come up
	I1002 11:53:51.008090  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:51.008535  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:51.008565  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:51.008455  385500 retry.go:31] will retry after 2.896131667s: waiting for machine to come up
	I1002 11:53:53.905835  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:53.906274  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:53.906309  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:53.906207  385500 retry.go:31] will retry after 3.463250216s: waiting for machine to come up
	I1002 11:53:58.755219  384787 start.go:369] acquired machines lock for "embed-certs-487027" in 4m10.971064405s
	I1002 11:53:58.755286  384787 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:53:58.755301  384787 fix.go:54] fixHost starting: 
	I1002 11:53:58.755691  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:53:58.755733  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:53:58.772186  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38267
	I1002 11:53:58.772591  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:53:58.773071  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:53:58.773101  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:53:58.773409  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:53:58.773585  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:53:58.773710  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:53:58.775231  384787 fix.go:102] recreateIfNeeded on embed-certs-487027: state=Stopped err=<nil>
	I1002 11:53:58.775273  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	W1002 11:53:58.775449  384787 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:53:58.778132  384787 out.go:177] * Restarting existing kvm2 VM for "embed-certs-487027" ...
	I1002 11:53:57.373844  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.374176  384505 main.go:141] libmachine: (old-k8s-version-749860) Found IP for machine: 192.168.83.82
	I1002 11:53:57.374195  384505 main.go:141] libmachine: (old-k8s-version-749860) Reserving static IP address...
	I1002 11:53:57.374208  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has current primary IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.374680  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "old-k8s-version-749860", mac: "52:54:00:d4:c3:b0", ip: "192.168.83.82"} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.374711  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | skip adding static IP to network mk-old-k8s-version-749860 - found existing host DHCP lease matching {name: "old-k8s-version-749860", mac: "52:54:00:d4:c3:b0", ip: "192.168.83.82"}
	I1002 11:53:57.374725  384505 main.go:141] libmachine: (old-k8s-version-749860) Reserved static IP address: 192.168.83.82
	I1002 11:53:57.374741  384505 main.go:141] libmachine: (old-k8s-version-749860) Waiting for SSH to be available...
	I1002 11:53:57.374758  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Getting to WaitForSSH function...
	I1002 11:53:57.377368  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.377757  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.377791  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.377890  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Using SSH client type: external
	I1002 11:53:57.377933  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa (-rw-------)
	I1002 11:53:57.377976  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:53:57.377995  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | About to run SSH command:
	I1002 11:53:57.378008  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | exit 0
	I1002 11:53:57.474496  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | SSH cmd err, output: <nil>: 
	I1002 11:53:57.474881  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetConfigRaw
	I1002 11:53:57.475581  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:53:57.478078  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.478423  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.478464  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.478679  384505 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/config.json ...
	I1002 11:53:57.478876  384505 machine.go:88] provisioning docker machine ...
	I1002 11:53:57.478895  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:57.479118  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetMachineName
	I1002 11:53:57.479286  384505 buildroot.go:166] provisioning hostname "old-k8s-version-749860"
	I1002 11:53:57.479300  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetMachineName
	I1002 11:53:57.479509  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:57.481462  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.481768  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.481805  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.481935  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:57.482138  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.482280  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.482438  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:57.482611  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:57.483038  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:57.483051  384505 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-749860 && echo "old-k8s-version-749860" | sudo tee /etc/hostname
	I1002 11:53:57.622724  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-749860
	
	I1002 11:53:57.622760  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:57.626222  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.626663  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.626707  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.626840  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:57.627102  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.627297  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.627513  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:57.627678  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:57.628068  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:57.628089  384505 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-749860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-749860/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-749860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:53:57.767587  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:53:57.767664  384505 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:53:57.767708  384505 buildroot.go:174] setting up certificates
	I1002 11:53:57.767721  384505 provision.go:83] configureAuth start
	I1002 11:53:57.767734  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetMachineName
	I1002 11:53:57.768045  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:53:57.771158  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.771591  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.771620  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.771825  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:57.774031  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.774444  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.774523  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.774529  384505 provision.go:138] copyHostCerts
	I1002 11:53:57.774608  384505 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:53:57.774623  384505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:53:57.774695  384505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:53:57.774787  384505 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:53:57.774797  384505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:53:57.774821  384505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:53:57.774884  384505 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:53:57.774891  384505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:53:57.774912  384505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:53:57.774970  384505 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-749860 san=[192.168.83.82 192.168.83.82 localhost 127.0.0.1 minikube old-k8s-version-749860]
	I1002 11:53:58.003098  384505 provision.go:172] copyRemoteCerts
	I1002 11:53:58.003163  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:53:58.003190  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.005944  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.006310  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.006345  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.006482  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.006734  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.006887  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.007049  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.099927  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:53:58.123424  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 11:53:58.147578  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 11:53:58.171190  384505 provision.go:86] duration metric: configureAuth took 403.448571ms
	I1002 11:53:58.171228  384505 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:53:58.171440  384505 config.go:182] Loaded profile config "old-k8s-version-749860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1002 11:53:58.171575  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.174314  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.174684  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.174723  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.174860  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.175078  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.175274  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.175409  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.175596  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:58.175908  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:58.175923  384505 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:53:58.491028  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:53:58.491062  384505 machine.go:91] provisioned docker machine in 1.012168334s
	I1002 11:53:58.491072  384505 start.go:300] post-start starting for "old-k8s-version-749860" (driver="kvm2")
	I1002 11:53:58.491085  384505 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:53:58.491106  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.491521  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:53:58.491558  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.494009  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.494382  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.494415  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.494546  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.494753  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.494903  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.495037  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.588465  384505 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:53:58.592844  384505 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:53:58.592872  384505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:53:58.592940  384505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:53:58.593047  384505 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:53:58.593171  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:53:58.601583  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:53:58.624453  384505 start.go:303] post-start completed in 133.365398ms
	I1002 11:53:58.624486  384505 fix.go:56] fixHost completed within 19.757224844s
	I1002 11:53:58.624511  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.627104  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.627476  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.627534  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.627695  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.627913  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.628105  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.628253  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.628426  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:58.628749  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:58.628762  384505 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:53:58.755032  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247638.703145377
	
	I1002 11:53:58.755056  384505 fix.go:206] guest clock: 1696247638.703145377
	I1002 11:53:58.755066  384505 fix.go:219] Guest: 2023-10-02 11:53:58.703145377 +0000 UTC Remote: 2023-10-02 11:53:58.624490602 +0000 UTC m=+284.515069275 (delta=78.654775ms)
	I1002 11:53:58.755092  384505 fix.go:190] guest clock delta is within tolerance: 78.654775ms
	I1002 11:53:58.755098  384505 start.go:83] releasing machines lock for "old-k8s-version-749860", held for 19.887910329s
	I1002 11:53:58.755126  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.755438  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:53:58.758172  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.758431  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.758467  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.758673  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.759288  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.759466  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.759560  384505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:53:58.759620  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.759717  384505 ssh_runner.go:195] Run: cat /version.json
	I1002 11:53:58.759748  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.762471  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.762618  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.762847  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.762879  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.762911  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.762943  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.763162  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.763185  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.763347  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.763363  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.763487  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.763661  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.763671  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.763828  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.880436  384505 ssh_runner.go:195] Run: systemctl --version
	I1002 11:53:58.886540  384505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:53:59.035347  384505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:53:59.041510  384505 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:53:59.041604  384505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:53:59.056030  384505 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:53:59.056062  384505 start.go:469] detecting cgroup driver to use...
	I1002 11:53:59.056147  384505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:53:59.068680  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:53:59.080770  384505 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:53:59.080823  384505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:53:59.093059  384505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:53:59.106603  384505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:53:59.223135  384505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:53:59.364085  384505 docker.go:213] disabling docker service ...
	I1002 11:53:59.364161  384505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:53:59.378131  384505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:53:59.390380  384505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:53:59.522236  384505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:53:59.663336  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:53:59.677221  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:53:59.694283  384505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1002 11:53:59.694380  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.703409  384505 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:53:59.703481  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.712316  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.721255  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.731204  384505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:53:59.741152  384505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:53:59.748978  384505 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:53:59.749036  384505 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:53:59.761692  384505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:53:59.770571  384505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:53:59.882809  384505 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:54:00.046741  384505 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:54:00.046843  384505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:54:00.051911  384505 start.go:537] Will wait 60s for crictl version
	I1002 11:54:00.051988  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:00.055847  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:54:00.099999  384505 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:54:00.100084  384505 ssh_runner.go:195] Run: crio --version
	I1002 11:54:00.155271  384505 ssh_runner.go:195] Run: crio --version
	I1002 11:54:00.202213  384505 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1002 11:53:58.780030  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Start
	I1002 11:53:58.780201  384787 main.go:141] libmachine: (embed-certs-487027) Ensuring networks are active...
	I1002 11:53:58.780857  384787 main.go:141] libmachine: (embed-certs-487027) Ensuring network default is active
	I1002 11:53:58.781206  384787 main.go:141] libmachine: (embed-certs-487027) Ensuring network mk-embed-certs-487027 is active
	I1002 11:53:58.781581  384787 main.go:141] libmachine: (embed-certs-487027) Getting domain xml...
	I1002 11:53:58.782269  384787 main.go:141] libmachine: (embed-certs-487027) Creating domain...
	I1002 11:54:00.079808  384787 main.go:141] libmachine: (embed-certs-487027) Waiting to get IP...
	I1002 11:54:00.080676  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:00.081052  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:00.081202  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:00.081070  385615 retry.go:31] will retry after 291.88616ms: waiting for machine to come up
	I1002 11:54:00.374941  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:00.375493  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:00.375526  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:00.375441  385615 retry.go:31] will retry after 315.924643ms: waiting for machine to come up
	I1002 11:54:00.693196  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:00.693804  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:00.693840  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:00.693754  385615 retry.go:31] will retry after 473.967353ms: waiting for machine to come up
	I1002 11:54:01.169616  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:01.170137  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:01.170168  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:01.170099  385615 retry.go:31] will retry after 490.884713ms: waiting for machine to come up
	I1002 11:54:01.662881  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:01.663427  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:01.663459  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:01.663380  385615 retry.go:31] will retry after 590.285109ms: waiting for machine to come up
	I1002 11:54:02.255409  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:02.256020  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:02.256048  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:02.255956  385615 retry.go:31] will retry after 586.734935ms: waiting for machine to come up
	I1002 11:54:00.203709  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:54:00.206822  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:54:00.207269  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:54:00.207308  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:54:00.207533  384505 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1002 11:54:00.211596  384505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:00.224503  384505 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1002 11:54:00.224558  384505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:00.267915  384505 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1002 11:54:00.267986  384505 ssh_runner.go:195] Run: which lz4
	I1002 11:54:00.272086  384505 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 11:54:00.276281  384505 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:54:00.276322  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1002 11:54:02.169153  384505 crio.go:444] Took 1.897111 seconds to copy over tarball
	I1002 11:54:02.169248  384505 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 11:54:02.844615  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:02.845091  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:02.845129  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:02.845049  385615 retry.go:31] will retry after 765.906555ms: waiting for machine to come up
	I1002 11:54:03.612904  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:03.613374  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:03.613515  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:03.613306  385615 retry.go:31] will retry after 1.240249135s: waiting for machine to come up
	I1002 11:54:04.855370  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:04.855832  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:04.855858  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:04.855785  385615 retry.go:31] will retry after 1.741253702s: waiting for machine to come up
	I1002 11:54:06.599800  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:06.600279  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:06.600307  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:06.600221  385615 retry.go:31] will retry after 1.945988456s: waiting for machine to come up
	I1002 11:54:05.257359  384505 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.088072266s)
	I1002 11:54:05.257395  384505 crio.go:451] Took 3.088214 seconds to extract the tarball
	I1002 11:54:05.257408  384505 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 11:54:05.296693  384505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:05.347131  384505 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1002 11:54:05.347156  384505 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 11:54:05.347231  384505 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:54:05.347239  384505 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.347291  384505 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.347523  384505 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.347545  384505 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.347590  384505 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1002 11:54:05.347712  384505 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.347797  384505 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.349061  384505 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.349109  384505 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:54:05.349136  384505 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.349165  384505 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.349072  384505 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.349076  384505 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.349075  384505 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.349490  384505 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1002 11:54:05.494581  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.497665  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.499676  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.503426  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1002 11:54:05.504502  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.507776  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.511534  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.589967  384505 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1002 11:54:05.590038  384505 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.590101  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.653382  384505 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1002 11:54:05.653450  384505 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.653539  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.674391  384505 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1002 11:54:05.674430  384505 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1002 11:54:05.674447  384505 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.674467  384505 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1002 11:54:05.674508  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.674498  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.674583  384505 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1002 11:54:05.674621  384505 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.674671  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.676359  384505 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1002 11:54:05.676390  384505 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.676425  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.680824  384505 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1002 11:54:05.680858  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.680871  384505 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.680894  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.680905  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.682827  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1002 11:54:05.690404  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.690496  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.690562  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.810224  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1002 11:54:05.840439  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1002 11:54:05.840472  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.840535  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1002 11:54:05.840544  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1002 11:54:05.840583  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1002 11:54:05.840643  384505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1002 11:54:05.840663  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1002 11:54:05.874997  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1002 11:54:05.875049  384505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1002 11:54:05.875079  384505 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1002 11:54:05.875136  384505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1002 11:54:06.317119  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:54:07.926701  384505 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.609537315s)
	I1002 11:54:07.926715  384505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.051548545s)
	I1002 11:54:07.926786  384505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1002 11:54:07.926855  384505 cache_images.go:92] LoadImages completed in 2.579686998s
	W1002 11:54:07.926953  384505 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I1002 11:54:07.927077  384505 ssh_runner.go:195] Run: crio config
	I1002 11:54:07.991410  384505 cni.go:84] Creating CNI manager for ""
	I1002 11:54:07.991433  384505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:07.991452  384505 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:54:07.991473  384505 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.82 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-749860 NodeName:old-k8s-version-749860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1002 11:54:07.991665  384505 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-749860"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.82
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.82"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-749860
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.82:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:54:07.991752  384505 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-749860 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-749860 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:54:07.991814  384505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1002 11:54:08.002239  384505 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:54:08.002313  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:54:08.012375  384505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1002 11:54:08.031554  384505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:54:08.050801  384505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1002 11:54:08.068326  384505 ssh_runner.go:195] Run: grep 192.168.83.82	control-plane.minikube.internal$ /etc/hosts
	I1002 11:54:08.072798  384505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.82	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:08.085261  384505 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860 for IP: 192.168.83.82
	I1002 11:54:08.085320  384505 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:54:08.085511  384505 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:54:08.085555  384505 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:54:08.085682  384505 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/client.key
	I1002 11:54:08.085771  384505 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/apiserver.key.bc78c23c
	I1002 11:54:08.085823  384505 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/proxy-client.key
	I1002 11:54:08.085973  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:54:08.086020  384505 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:54:08.086035  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:54:08.086071  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:54:08.086101  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:54:08.086163  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:54:08.086237  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:08.087038  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:54:08.111230  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:54:08.133515  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:54:08.157382  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:54:08.180186  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:54:08.210075  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:54:08.232068  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:54:08.253873  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:54:08.276866  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:54:08.300064  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:54:08.322265  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:54:08.346808  384505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:54:08.367194  384505 ssh_runner.go:195] Run: openssl version
	I1002 11:54:08.374709  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:54:08.389274  384505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:08.395338  384505 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:08.395420  384505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:08.401338  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:54:08.412228  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:54:08.423293  384505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:54:08.428146  384505 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:54:08.428213  384505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:54:08.434177  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:54:08.449342  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:54:08.463678  384505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:54:08.468723  384505 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:54:08.468795  384505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:54:08.476711  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:54:08.492116  384505 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:54:08.498510  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:54:08.504961  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:54:08.513012  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:54:08.520620  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:54:08.528578  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:54:08.534685  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:54:08.541262  384505 kubeadm.go:404] StartCluster: {Name:old-k8s-version-749860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-749860 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.82 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:54:08.541401  384505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:54:08.541474  384505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:08.579821  384505 cri.go:89] found id: ""
	I1002 11:54:08.579899  384505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:54:08.590328  384505 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:54:08.590359  384505 kubeadm.go:636] restartCluster start
	I1002 11:54:08.590419  384505 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:54:08.600034  384505 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:08.601660  384505 kubeconfig.go:92] found "old-k8s-version-749860" server: "https://192.168.83.82:8443"
	I1002 11:54:08.605641  384505 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:54:08.615274  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:08.615340  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:08.630952  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:08.630979  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:08.631032  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:08.642433  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:08.547687  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:08.548295  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:08.548331  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:08.548238  385615 retry.go:31] will retry after 2.817726625s: waiting for machine to come up
	I1002 11:54:11.367346  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:11.367909  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:11.367943  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:11.367859  385615 retry.go:31] will retry after 3.066326625s: waiting for machine to come up
	I1002 11:54:09.142569  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:09.143607  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:09.155937  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:09.642536  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:09.642637  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:09.655230  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:10.142683  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:10.142769  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:10.155206  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:10.642757  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:10.642857  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:10.659345  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:11.142860  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:11.142955  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:11.158336  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:11.642849  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:11.642934  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:11.658819  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:12.143538  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:12.143645  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:12.159984  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:12.642536  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:12.642679  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:12.658031  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:13.143496  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:13.143607  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:13.159279  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:13.643567  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:13.643659  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:13.657189  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:14.435299  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:14.435744  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:14.435777  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:14.435699  385615 retry.go:31] will retry after 3.446313194s: waiting for machine to come up
	I1002 11:54:19.007568  384965 start.go:369] acquired machines lock for "default-k8s-diff-port-777999" in 4m4.857829673s
	I1002 11:54:19.007726  384965 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:54:19.007735  384965 fix.go:54] fixHost starting: 
	I1002 11:54:19.008181  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:54:19.008225  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:54:19.025286  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38903
	I1002 11:54:19.025755  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:54:19.026243  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:54:19.026265  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:54:19.026648  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:54:19.026869  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:19.027056  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:54:19.028773  384965 fix.go:102] recreateIfNeeded on default-k8s-diff-port-777999: state=Stopped err=<nil>
	I1002 11:54:19.028799  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	W1002 11:54:19.028984  384965 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:54:19.031466  384965 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-777999" ...
	I1002 11:54:19.033140  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Start
	I1002 11:54:19.033346  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Ensuring networks are active...
	I1002 11:54:19.034009  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Ensuring network default is active
	I1002 11:54:19.034440  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Ensuring network mk-default-k8s-diff-port-777999 is active
	I1002 11:54:19.034843  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Getting domain xml...
	I1002 11:54:19.035519  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Creating domain...
	I1002 11:54:14.142550  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:14.142618  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:14.154742  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:14.643429  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:14.643522  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:14.656075  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:15.142577  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:15.142669  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:15.154422  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:15.643360  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:15.643450  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:15.655255  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:16.142806  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:16.142948  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:16.154896  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:16.643505  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:16.643581  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:16.655413  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:17.142981  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:17.143087  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:17.156411  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:17.642996  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:17.643100  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:17.656886  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:18.143481  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:18.143563  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:18.157184  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
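	The repeated entries above poll `pgrep` for a kube-apiserver process until a deadline is hit. That retry pattern can be sketched as a small shell loop; this is an illustration, not minikube's implementation, and the process name and attempt count here are made up (it assumes `pgrep` from procps is installed):

```shell
#!/bin/sh
# Poll for a process matching a name, giving up after a fixed number of
# attempts -- mirroring the "Checking apiserver status ..." retry loop.
# "no-such-proc-xyz" matches nothing, so this sketch is expected to
# report a missed deadline, like the log above.
wait_for_process() {
  pattern="$1"; attempts="$2"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if pgrep -x "$pattern" >/dev/null 2>&1; then
      echo "running"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "deadline exceeded"
  return 1
}

wait_for_process "no-such-proc-xyz" 2 || true
```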
	I1002 11:54:18.616095  384505 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:54:18.616128  384505 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:54:18.616142  384505 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:54:18.616204  384505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:18.654952  384505 cri.go:89] found id: ""
	I1002 11:54:18.655033  384505 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:54:18.674155  384505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:54:18.685052  384505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
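	The config check above gates stale-config cleanup on a single `ls` over the expected kubeconfig files: if any file is missing, `ls` exits non-zero and cleanup is skipped. A minimal, side-effect-free sketch of that gate (paths shortened to a temp directory; the message text echoes the log but is otherwise illustrative):

```shell
#!/bin/sh
# One missing file is enough to make `ls` fail, so the check degrades
# safely when a node has been wiped, as in the log above.
dir=$(mktemp -d)
touch "$dir/admin.conf"   # kubelet.conf deliberately absent

result="configs present"
ls "$dir/admin.conf" "$dir/kubelet.conf" >/dev/null 2>&1 || \
  result="config check failed, skipping stale config cleanup"
echo "$result"
rm -rf "$dir"
```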
	I1002 11:54:18.685116  384505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:18.695816  384505 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:18.695844  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:18.821270  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:17.886333  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.886895  384787 main.go:141] libmachine: (embed-certs-487027) Found IP for machine: 192.168.72.147
	I1002 11:54:17.886926  384787 main.go:141] libmachine: (embed-certs-487027) Reserving static IP address...
	I1002 11:54:17.886947  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has current primary IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.887365  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "embed-certs-487027", mac: "52:54:00:06:60:23", ip: "192.168.72.147"} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.887396  384787 main.go:141] libmachine: (embed-certs-487027) DBG | skip adding static IP to network mk-embed-certs-487027 - found existing host DHCP lease matching {name: "embed-certs-487027", mac: "52:54:00:06:60:23", ip: "192.168.72.147"}
	I1002 11:54:17.887404  384787 main.go:141] libmachine: (embed-certs-487027) Reserved static IP address: 192.168.72.147
	I1002 11:54:17.887420  384787 main.go:141] libmachine: (embed-certs-487027) Waiting for SSH to be available...
	I1002 11:54:17.887437  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Getting to WaitForSSH function...
	I1002 11:54:17.889775  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.890175  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.890214  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.890410  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Using SSH client type: external
	I1002 11:54:17.890434  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa (-rw-------)
	I1002 11:54:17.890470  384787 main.go:141] libmachine: (embed-certs-487027) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:54:17.890502  384787 main.go:141] libmachine: (embed-certs-487027) DBG | About to run SSH command:
	I1002 11:54:17.890514  384787 main.go:141] libmachine: (embed-certs-487027) DBG | exit 0
	I1002 11:54:17.974015  384787 main.go:141] libmachine: (embed-certs-487027) DBG | SSH cmd err, output: <nil>: 
	I1002 11:54:17.974444  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetConfigRaw
	I1002 11:54:17.975209  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:17.977468  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.977798  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.977837  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.978016  384787 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/config.json ...
	I1002 11:54:17.978201  384787 machine.go:88] provisioning docker machine ...
	I1002 11:54:17.978220  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:17.978460  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetMachineName
	I1002 11:54:17.978651  384787 buildroot.go:166] provisioning hostname "embed-certs-487027"
	I1002 11:54:17.978669  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetMachineName
	I1002 11:54:17.978817  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:17.980872  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.981298  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.981333  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.981395  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:17.981587  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:17.981746  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:17.981885  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:17.982020  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:17.982399  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:17.982413  384787 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-487027 && echo "embed-certs-487027" | sudo tee /etc/hostname
	I1002 11:54:18.103274  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-487027
	
	I1002 11:54:18.103311  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.106230  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.106654  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.106709  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.106847  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.107082  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.107266  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.107400  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.107589  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:18.108051  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:18.108081  384787 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-487027' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-487027/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-487027' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:54:18.222398  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
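	The SSH command above makes the new hostname resolve locally by rewriting an existing `127.0.1.1` line in `/etc/hosts`, or appending one if none exists. The same idempotent edit can be exercised against a scratch hosts file; the hostname and file here are stand-ins, and the in-place `sed -i` from the log is replaced with a portable temp-file rewrite:

```shell
#!/bin/sh
# Ensure "$host" maps to 127.0.1.1: rewrite the existing entry if there
# is one, otherwise append -- the same logic the provisioner runs.
host="embed-certs-demo"
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"

if ! grep -q "[[:space:]]$host\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # rewrite via a temp copy instead of GNU-only `sed -i`
    sed "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $host/" "$hosts" > "$hosts.new" \
      && mv "$hosts.new" "$hosts"
  else
    echo "127.0.1.1 $host" >> "$hosts"
  fi
fi
grep '^127\.0\.1\.1' "$hosts"
```

Running the block a second time changes nothing, which is why the provisioner can safely re-run it on every restart.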
	I1002 11:54:18.222431  384787 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:54:18.222453  384787 buildroot.go:174] setting up certificates
	I1002 11:54:18.222488  384787 provision.go:83] configureAuth start
	I1002 11:54:18.222500  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetMachineName
	I1002 11:54:18.222817  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:18.225631  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.226114  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.226150  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.226262  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.228719  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.229096  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.229130  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.229268  384787 provision.go:138] copyHostCerts
	I1002 11:54:18.229336  384787 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:54:18.229351  384787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:54:18.229399  384787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:54:18.229480  384787 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:54:18.229492  384787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:54:18.229511  384787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:54:18.229563  384787 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:54:18.229570  384787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:54:18.229586  384787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:54:18.229630  384787 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.embed-certs-487027 san=[192.168.72.147 192.168.72.147 localhost 127.0.0.1 minikube embed-certs-487027]
	I1002 11:54:18.296130  384787 provision.go:172] copyRemoteCerts
	I1002 11:54:18.296187  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:54:18.296212  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.298721  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.299036  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.299059  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.299181  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.299363  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.299479  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.299628  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:18.384449  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 11:54:18.406096  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:54:18.427407  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 11:54:18.448829  384787 provision.go:86] duration metric: configureAuth took 226.314252ms
	I1002 11:54:18.448858  384787 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:54:18.449065  384787 config.go:182] Loaded profile config "embed-certs-487027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:54:18.449178  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.451995  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.452365  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.452405  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.452596  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.452786  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.452958  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.453077  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.453213  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:18.453571  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:18.453606  384787 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:54:18.754879  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:54:18.754913  384787 machine.go:91] provisioned docker machine in 776.69782ms
	I1002 11:54:18.754927  384787 start.go:300] post-start starting for "embed-certs-487027" (driver="kvm2")
	I1002 11:54:18.754941  384787 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:54:18.754966  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:18.755361  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:54:18.755392  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.758184  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.758644  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.758700  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.758788  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.758981  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.759149  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.759414  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:18.847614  384787 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:54:18.851792  384787 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:54:18.851821  384787 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:54:18.851911  384787 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:54:18.852023  384787 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:54:18.852152  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:54:18.861415  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:18.883190  384787 start.go:303] post-start completed in 128.242372ms
	I1002 11:54:18.883222  384787 fix.go:56] fixHost completed within 20.127922888s
	I1002 11:54:18.883249  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.885771  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.886114  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.886141  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.886335  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.886598  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.886784  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.886922  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.887111  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:18.887556  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:18.887574  384787 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:54:19.007352  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247658.948838951
	
	I1002 11:54:19.007388  384787 fix.go:206] guest clock: 1696247658.948838951
	I1002 11:54:19.007404  384787 fix.go:219] Guest: 2023-10-02 11:54:18.948838951 +0000 UTC Remote: 2023-10-02 11:54:18.883226893 +0000 UTC m=+271.237550126 (delta=65.612058ms)
	I1002 11:54:19.007464  384787 fix.go:190] guest clock delta is within tolerance: 65.612058ms
	I1002 11:54:19.007471  384787 start.go:83] releasing machines lock for "embed-certs-487027", held for 20.25221392s
	I1002 11:54:19.007510  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.007831  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:19.011020  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.011386  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:19.011418  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.011602  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.012303  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.012520  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.012602  384787 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:54:19.012660  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:19.012946  384787 ssh_runner.go:195] Run: cat /version.json
	I1002 11:54:19.012976  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:19.015652  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.015935  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.016016  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:19.016063  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.016284  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:19.016411  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:19.016439  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.016482  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:19.016638  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:19.016653  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:19.016868  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:19.016871  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:19.017017  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:19.017199  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:19.124634  384787 ssh_runner.go:195] Run: systemctl --version
	I1002 11:54:19.130340  384787 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:54:19.278814  384787 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:54:19.284549  384787 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:54:19.284618  384787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:54:19.300872  384787 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:54:19.300896  384787 start.go:469] detecting cgroup driver to use...
	I1002 11:54:19.300984  384787 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:54:19.314898  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:54:19.327762  384787 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:54:19.327826  384787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:54:19.341164  384787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:54:19.354542  384787 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:54:19.469125  384787 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:54:19.581195  384787 docker.go:213] disabling docker service ...
	I1002 11:54:19.581260  384787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:54:19.595222  384787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:54:19.607587  384787 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:54:19.725376  384787 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:54:19.828507  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:54:19.845782  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:54:19.868464  384787 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:54:19.868530  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.881554  384787 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:54:19.881633  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.894090  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.905922  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.918336  384787 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:54:19.931259  384787 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:54:19.939861  384787 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:54:19.939925  384787 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:54:19.954089  384787 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:54:19.966438  384787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:54:20.124666  384787 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:54:20.329505  384787 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:54:20.329602  384787 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:54:20.336428  384787 start.go:537] Will wait 60s for crictl version
	I1002 11:54:20.336499  384787 ssh_runner.go:195] Run: which crictl
	I1002 11:54:20.343269  384787 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:54:20.386249  384787 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:54:20.386331  384787 ssh_runner.go:195] Run: crio --version
	I1002 11:54:20.429634  384787 ssh_runner.go:195] Run: crio --version
	I1002 11:54:20.476699  384787 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:54:20.478035  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:20.480720  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:20.481028  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:20.481054  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:20.481230  384787 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1002 11:54:20.485387  384787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:20.496957  384787 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:54:20.497028  384787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:20.539655  384787 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 11:54:20.539731  384787 ssh_runner.go:195] Run: which lz4
	I1002 11:54:20.543869  384787 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1002 11:54:20.548080  384787 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:54:20.548112  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1002 11:54:22.411067  384787 crio.go:444] Took 1.867223 seconds to copy over tarball
	I1002 11:54:22.411155  384787 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 11:54:20.416319  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting to get IP...
	I1002 11:54:20.417168  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.417561  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.417613  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:20.417539  385761 retry.go:31] will retry after 211.341658ms: waiting for machine to come up
	I1002 11:54:20.631097  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.631841  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.632011  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:20.631972  385761 retry.go:31] will retry after 257.651992ms: waiting for machine to come up
	I1002 11:54:20.891519  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.892077  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.892111  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:20.892047  385761 retry.go:31] will retry after 295.599576ms: waiting for machine to come up
	I1002 11:54:21.189739  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.190333  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.190389  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:21.190275  385761 retry.go:31] will retry after 532.182463ms: waiting for machine to come up
	I1002 11:54:21.723822  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.724414  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.724443  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:21.724314  385761 retry.go:31] will retry after 576.235756ms: waiting for machine to come up
	I1002 11:54:22.301975  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:22.302566  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:22.302600  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:22.302479  385761 retry.go:31] will retry after 913.441142ms: waiting for machine to come up
	I1002 11:54:23.217419  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:23.217905  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:23.217943  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:23.217839  385761 retry.go:31] will retry after 1.089960204s: waiting for machine to come up
	I1002 11:54:19.625761  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:19.857853  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:19.977490  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:20.080170  384505 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:54:20.080294  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:20.097093  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:20.611090  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:21.110857  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:21.610499  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:22.111420  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:22.138171  384505 api_server.go:72] duration metric: took 2.057999603s to wait for apiserver process to appear ...
	I1002 11:54:22.138201  384505 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:54:22.138224  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:25.604442  384787 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.193244457s)
	I1002 11:54:25.604543  384787 crio.go:451] Took 3.193443 seconds to extract the tarball
	I1002 11:54:25.604568  384787 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 11:54:25.660515  384787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:25.723308  384787 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 11:54:25.723339  384787 cache_images.go:84] Images are preloaded, skipping loading
	I1002 11:54:25.723436  384787 ssh_runner.go:195] Run: crio config
	I1002 11:54:25.781690  384787 cni.go:84] Creating CNI manager for ""
	I1002 11:54:25.781722  384787 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:25.781748  384787 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:54:25.781775  384787 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.147 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-487027 NodeName:embed-certs-487027 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:54:25.782020  384787 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-487027"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:54:25.782125  384787 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-487027 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:embed-certs-487027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:54:25.782183  384787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:54:25.791322  384787 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:54:25.791398  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:54:25.799709  384787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1002 11:54:25.818900  384787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:54:25.836913  384787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1002 11:54:25.856201  384787 ssh_runner.go:195] Run: grep 192.168.72.147	control-plane.minikube.internal$ /etc/hosts
	I1002 11:54:25.859962  384787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:25.872776  384787 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027 for IP: 192.168.72.147
	I1002 11:54:25.872818  384787 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:54:25.873061  384787 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:54:25.873125  384787 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:54:25.873225  384787 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/client.key
	I1002 11:54:25.873312  384787 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/apiserver.key.b24df18b
	I1002 11:54:25.873375  384787 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/proxy-client.key
	I1002 11:54:25.873530  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:54:25.873590  384787 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:54:25.873602  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:54:25.873633  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:54:25.873667  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:54:25.873702  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:54:25.873757  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:25.874732  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:54:25.901588  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:54:25.929381  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:54:25.955358  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:54:25.980414  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:54:26.008652  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:54:26.038061  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:54:26.067828  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:54:26.098717  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:54:26.131030  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:54:26.162989  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:54:26.189458  384787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:54:26.206791  384787 ssh_runner.go:195] Run: openssl version
	I1002 11:54:26.214436  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:54:26.226064  384787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:26.231428  384787 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:26.231504  384787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:26.238070  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:54:26.252779  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:54:26.267263  384787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:54:26.272245  384787 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:54:26.272316  384787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:54:26.278088  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:54:26.289430  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:54:26.300788  384787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:54:26.305731  384787 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:54:26.305812  384787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:54:26.311712  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
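	[editor's note: the `/etc/ssl/certs/b5213941.0`, `51391683.0`, and `3ec20f2e.0` symlink names in the log above follow OpenSSL's subject-hash convention, which minikube reproduces by hand. A minimal sketch of the same pattern, using a throwaway self-signed certificate (all names below are illustrative, not from the test run):

```shell
set -e
tmp=$(mktemp -d)
# Generate a throwaway self-signed cert to stand in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$tmp/demo.key" -out "$tmp/demo.pem" 2>/dev/null
# Compute the subject hash OpenSSL uses to locate CAs in a cert directory.
hash=$(openssl x509 -hash -noout -in "$tmp/demo.pem")
# Link <hash>.0 -> cert, exactly as the `ln -fs` commands in the log do.
ln -fs "$tmp/demo.pem" "$tmp/$hash.0"
ls -l "$tmp/$hash.0"
```

	The `.0` suffix disambiguates multiple certificates that hash to the same value.]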
	I1002 11:54:26.322855  384787 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:54:26.328688  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:54:26.336570  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:54:26.344412  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:54:26.350583  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:54:26.356815  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:54:26.364674  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
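	[editor's note: the `-checkend 86400` runs above are minikube's pre-start expiry checks: `openssl x509 -checkend N` exits 0 only if the certificate remains valid for at least N more seconds (here 86400s = 24h). A self-contained sketch with a throwaway cert (names are illustrative):

```shell
set -e
tmp=$(mktemp -d)
# A cert valid for 30 days, standing in for e.g. etcd/server.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 -subj "/CN=expiry-demo" \
  -keyout "$tmp/k.pem" -out "$tmp/c.pem" 2>/dev/null
# Exit status 0 means: still valid 86400 seconds from now.
if openssl x509 -noout -in "$tmp/c.pem" -checkend 86400; then
  echo "valid for at least 24h"
fi
```
]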
	I1002 11:54:26.372219  384787 kubeadm.go:404] StartCluster: {Name:embed-certs-487027 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-487027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.147 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:54:26.372341  384787 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:54:26.372397  384787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:26.424018  384787 cri.go:89] found id: ""
	I1002 11:54:26.424131  384787 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:54:26.435493  384787 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:54:26.435520  384787 kubeadm.go:636] restartCluster start
	I1002 11:54:26.435583  384787 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:54:26.447429  384787 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:26.448848  384787 kubeconfig.go:92] found "embed-certs-487027" server: "https://192.168.72.147:8443"
	I1002 11:54:26.452474  384787 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:54:26.462854  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:26.462924  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:26.475723  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:26.475751  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:26.475803  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:26.488962  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:26.989693  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:26.989776  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:27.002889  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:27.489487  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:27.489589  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:27.503912  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:24.308867  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:24.309362  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:24.309392  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:24.309326  385761 retry.go:31] will retry after 1.381170872s: waiting for machine to come up
	I1002 11:54:25.691931  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:25.692285  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:25.692386  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:25.692267  385761 retry.go:31] will retry after 1.748966707s: waiting for machine to come up
	I1002 11:54:27.442708  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:27.443145  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:27.443171  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:27.443107  385761 retry.go:31] will retry after 2.105420589s: waiting for machine to come up
	I1002 11:54:27.138701  384505 api_server.go:269] stopped: https://192.168.83.82:8443/healthz: Get "https://192.168.83.82:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 11:54:27.138757  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:28.249499  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:54:28.249540  384505 api_server.go:103] status: https://192.168.83.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:54:28.750389  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:28.756351  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1002 11:54:28.756390  384505 api_server.go:103] status: https://192.168.83.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1002 11:54:29.250308  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:29.257228  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1002 11:54:29.257264  384505 api_server.go:103] status: https://192.168.83.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1002 11:54:29.750123  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:29.758475  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 200:
	ok
	I1002 11:54:29.769049  384505 api_server.go:141] control plane version: v1.16.0
	I1002 11:54:29.769079  384505 api_server.go:131] duration metric: took 7.630868963s to wait for apiserver health ...
	I1002 11:54:29.769098  384505 cni.go:84] Creating CNI manager for ""
	I1002 11:54:29.769107  384505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:29.770969  384505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
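	[editor's note: the `api_server.go` lines above show minikube's health-wait loop: poll `https://<ip>:8443/healthz`, tolerate the bootstrap-phase 403/500 responses, and retry with a delay until it returns 200 "ok" (7.63s total here). The retry skeleton, with a stub `check` standing in for the real HTTPS GET:

```shell
# `check` is a stand-in for curling the healthz endpoint; it fails twice,
# then succeeds, mimicking the 403 -> 500 -> 200 progression in the log.
attempts=0
check() { attempts=$((attempts+1)); [ "$attempts" -ge 3 ]; }
until check; do
  sleep 0.1   # the real code backs off ~500ms between probes
done
echo "healthy after $attempts attempts"
```
]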
	I1002 11:54:27.989735  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:27.989861  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:28.007059  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:28.489495  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:28.489605  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:28.505845  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:28.989879  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:28.989963  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:29.004220  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:29.489847  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:29.489949  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:29.502986  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:29.989170  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:29.989264  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:30.006850  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:30.489389  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:30.489504  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:30.502094  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:30.989302  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:30.989399  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:31.005902  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:31.489967  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:31.490080  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:31.503748  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:31.989317  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:31.989405  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:32.003288  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:32.489803  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:32.489924  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:32.506744  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:29.550027  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:29.550550  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:29.550585  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:29.550488  385761 retry.go:31] will retry after 2.509962026s: waiting for machine to come up
	I1002 11:54:32.063392  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:32.063862  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:32.063887  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:32.063834  385761 retry.go:31] will retry after 2.845339865s: waiting for machine to come up
	I1002 11:54:29.772611  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:54:29.786551  384505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:54:29.807894  384505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:54:29.818837  384505 system_pods.go:59] 7 kube-system pods found
	I1002 11:54:29.818890  384505 system_pods.go:61] "coredns-5644d7b6d9-9xdpq" [2d10c772-e2f0-4bfc-9795-0721f8bab31c] Running
	I1002 11:54:29.818901  384505 system_pods.go:61] "etcd-old-k8s-version-749860" [5826895a-f14d-43ab-9f22-edad964d4a8e] Running
	I1002 11:54:29.818910  384505 system_pods.go:61] "kube-apiserver-old-k8s-version-749860" [3418ba32-aa28-4587-a231-b1f218181e71] Running
	I1002 11:54:29.818919  384505 system_pods.go:61] "kube-controller-manager-old-k8s-version-749860" [e42ff4c0-2ec4-45b9-8189-6a225c79f5c6] Running
	I1002 11:54:29.818927  384505 system_pods.go:61] "kube-proxy-gkhxb" [b3675678-e1cf-4d86-82d9-9e068bd1ba19] Running
	I1002 11:54:29.818939  384505 system_pods.go:61] "kube-scheduler-old-k8s-version-749860" [53a1c8a7-ec6d-4d47-a980-8cfab71ad467] Running
	I1002 11:54:29.818948  384505 system_pods.go:61] "storage-provisioner" [e73d6f24-1392-40ca-b37d-03c035734d1d] Running
	I1002 11:54:29.818964  384505 system_pods.go:74] duration metric: took 11.044895ms to wait for pod list to return data ...
	I1002 11:54:29.818980  384505 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:54:29.822392  384505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:54:29.822455  384505 node_conditions.go:123] node cpu capacity is 2
	I1002 11:54:29.822472  384505 node_conditions.go:105] duration metric: took 3.48317ms to run NodePressure ...
	I1002 11:54:29.822520  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:30.106960  384505 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:54:30.111692  384505 retry.go:31] will retry after 218.727225ms: kubelet not initialised
	I1002 11:54:30.336456  384505 retry.go:31] will retry after 524.868139ms: kubelet not initialised
	I1002 11:54:30.867554  384505 retry.go:31] will retry after 427.897694ms: kubelet not initialised
	I1002 11:54:31.301616  384505 retry.go:31] will retry after 722.780158ms: kubelet not initialised
	I1002 11:54:32.029512  384505 retry.go:31] will retry after 1.205429819s: kubelet not initialised
	I1002 11:54:33.253735  384505 retry.go:31] will retry after 1.476521325s: kubelet not initialised
	I1002 11:54:32.989607  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:32.989718  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:33.004745  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:33.489141  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:33.489215  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:33.506018  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:33.990120  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:33.990217  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:34.005050  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:34.489520  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:34.489608  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:34.501965  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:34.989481  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:34.989584  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:35.002635  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:35.489123  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:35.489199  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:35.502995  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:35.989474  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:35.989565  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:36.003010  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:36.463582  384787 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:54:36.463614  384787 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:54:36.463628  384787 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:54:36.463689  384787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:36.503915  384787 cri.go:89] found id: ""
	I1002 11:54:36.503982  384787 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:54:36.519603  384787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:54:36.529026  384787 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:54:36.529086  384787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:36.538424  384787 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:36.538451  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:36.670492  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:34.910513  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:34.911092  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:34.911136  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:34.911030  385761 retry.go:31] will retry after 3.250805502s: waiting for machine to come up
	I1002 11:54:38.163585  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.164065  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Found IP for machine: 192.168.61.251
	I1002 11:54:38.164104  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has current primary IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.164124  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Reserving static IP address...
	I1002 11:54:38.164549  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-777999", mac: "52:54:00:15:a7:c9", ip: "192.168.61.251"} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.164588  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | skip adding static IP to network mk-default-k8s-diff-port-777999 - found existing host DHCP lease matching {name: "default-k8s-diff-port-777999", mac: "52:54:00:15:a7:c9", ip: "192.168.61.251"}
	I1002 11:54:38.164604  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Reserved static IP address: 192.168.61.251
	I1002 11:54:38.164623  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for SSH to be available...
	I1002 11:54:38.164639  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Getting to WaitForSSH function...
	I1002 11:54:38.166901  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.167279  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.167313  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.167579  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Using SSH client type: external
	I1002 11:54:38.167610  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa (-rw-------)
	I1002 11:54:38.167649  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:54:38.167671  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | About to run SSH command:
	I1002 11:54:38.167694  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | exit 0
	I1002 11:54:38.274617  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | SSH cmd err, output: <nil>: 
	I1002 11:54:38.275081  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetConfigRaw
	I1002 11:54:38.275836  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:38.278750  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.279150  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.279193  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.279391  384965 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/config.json ...
	I1002 11:54:38.279621  384965 machine.go:88] provisioning docker machine ...
	I1002 11:54:38.279646  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:38.279886  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetMachineName
	I1002 11:54:38.280069  384965 buildroot.go:166] provisioning hostname "default-k8s-diff-port-777999"
	I1002 11:54:38.280094  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetMachineName
	I1002 11:54:38.280253  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.282736  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.283104  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.283136  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.283230  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.283399  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.283578  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.283733  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.283892  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:38.284295  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:38.284312  384965 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-777999 && echo "default-k8s-diff-port-777999" | sudo tee /etc/hostname
	I1002 11:54:38.443082  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-777999
	
	I1002 11:54:38.443200  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.446493  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.447061  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.447106  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.447288  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.447549  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.447737  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.447899  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.448132  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:38.448554  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:38.448586  384965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-777999' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-777999/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-777999' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:54:38.594884  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:54:38.594920  384965 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:54:38.594956  384965 buildroot.go:174] setting up certificates
	I1002 11:54:38.594975  384965 provision.go:83] configureAuth start
	I1002 11:54:38.594993  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetMachineName
	I1002 11:54:38.595325  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:38.597718  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.598053  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.598088  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.598217  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.600751  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.601065  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.601099  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.601219  384965 provision.go:138] copyHostCerts
	I1002 11:54:38.601300  384965 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:54:38.601316  384965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:54:38.601393  384965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:54:38.601520  384965 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:54:38.601534  384965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:54:38.601565  384965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:54:38.601634  384965 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:54:38.601644  384965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:54:38.601670  384965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:54:38.601728  384965 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-777999 san=[192.168.61.251 192.168.61.251 localhost 127.0.0.1 minikube default-k8s-diff-port-777999]
	I1002 11:54:38.706714  384965 provision.go:172] copyRemoteCerts
	I1002 11:54:38.706783  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:54:38.706847  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.709075  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.709491  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.709547  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.709658  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.709903  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.710087  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.710216  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:38.803103  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 11:54:38.825916  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:54:38.847881  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1002 11:54:38.873772  384965 provision.go:86] duration metric: configureAuth took 278.777931ms
	I1002 11:54:38.873804  384965 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:54:38.874066  384965 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:54:38.874154  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.876864  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.877269  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.877304  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.877453  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.877666  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.877797  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.877936  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.878087  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:38.878441  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:38.878469  384965 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:54:34.736594  384505 retry.go:31] will retry after 1.866771295s: kubelet not initialised
	I1002 11:54:36.609977  384505 retry.go:31] will retry after 4.83087592s: kubelet not initialised
	I1002 11:54:39.495298  384344 start.go:369] acquired machines lock for "no-preload-304121" in 55.626389891s
	I1002 11:54:39.495355  384344 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:54:39.495364  384344 fix.go:54] fixHost starting: 
	I1002 11:54:39.495800  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:54:39.495839  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:54:39.518491  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I1002 11:54:39.518893  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:54:39.519407  384344 main.go:141] libmachine: Using API Version  1
	I1002 11:54:39.519432  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:54:39.519757  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:54:39.519941  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:54:39.520099  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 11:54:39.521857  384344 fix.go:102] recreateIfNeeded on no-preload-304121: state=Stopped err=<nil>
	I1002 11:54:39.521885  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	W1002 11:54:39.522058  384344 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:54:39.524119  384344 out.go:177] * Restarting existing kvm2 VM for "no-preload-304121" ...
	I1002 11:54:39.215761  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:54:39.215794  384965 machine.go:91] provisioned docker machine in 936.155542ms
	I1002 11:54:39.215807  384965 start.go:300] post-start starting for "default-k8s-diff-port-777999" (driver="kvm2")
	I1002 11:54:39.215822  384965 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:54:39.215848  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.216265  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:54:39.216305  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.219032  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.219387  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.219418  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.219542  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.219748  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.219910  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.220054  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:39.317075  384965 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:54:39.321405  384965 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:54:39.321429  384965 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:54:39.321505  384965 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:54:39.321599  384965 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:54:39.321716  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:54:39.330980  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:39.357830  384965 start.go:303] post-start completed in 142.005546ms
	I1002 11:54:39.357863  384965 fix.go:56] fixHost completed within 20.350127508s
	I1002 11:54:39.357900  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.360232  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.360561  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.360598  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.360768  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.360966  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.361139  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.361264  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.361425  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:39.361918  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:39.361939  384965 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:54:39.495129  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247679.435720520
	
	I1002 11:54:39.495155  384965 fix.go:206] guest clock: 1696247679.435720520
	I1002 11:54:39.495166  384965 fix.go:219] Guest: 2023-10-02 11:54:39.43572052 +0000 UTC Remote: 2023-10-02 11:54:39.357871423 +0000 UTC m=+265.343763085 (delta=77.849097ms)
	I1002 11:54:39.495194  384965 fix.go:190] guest clock delta is within tolerance: 77.849097ms
	I1002 11:54:39.495206  384965 start.go:83] releasing machines lock for "default-k8s-diff-port-777999", held for 20.487515438s
	I1002 11:54:39.495242  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.495652  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:39.498667  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.499055  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.499114  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.499370  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.499891  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.500060  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.500132  384965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:54:39.500199  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.500539  384965 ssh_runner.go:195] Run: cat /version.json
	I1002 11:54:39.500565  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.503388  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.503580  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.503885  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.503917  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.503995  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.504000  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.504081  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.504281  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.504297  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.504459  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.504459  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.504682  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.504680  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:39.504825  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:39.623582  384965 ssh_runner.go:195] Run: systemctl --version
	I1002 11:54:39.631181  384965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:54:39.787298  384965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:54:39.795202  384965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:54:39.795303  384965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:54:39.816471  384965 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:54:39.816495  384965 start.go:469] detecting cgroup driver to use...
	I1002 11:54:39.816567  384965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:54:39.836594  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:54:39.852798  384965 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:54:39.852911  384965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:54:39.868676  384965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:54:39.885480  384965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:54:40.003441  384965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:54:40.146812  384965 docker.go:213] disabling docker service ...
	I1002 11:54:40.146916  384965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:54:40.163451  384965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:54:40.178327  384965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:54:40.339579  384965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:54:40.463502  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:54:40.476402  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:54:40.499021  384965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:54:40.499117  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.511680  384965 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:54:40.511752  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.524364  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.536675  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
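The four sed invocations above rewrite the cri-o drop-in in place: set the pause image, switch the cgroup manager to cgroupfs, then delete any existing `conmon_cgroup` line and re-append `conmon_cgroup = "pod"` after `cgroup_manager`. A sketch of the same edits applied to a scratch copy rather than `/etc/crio/crio.conf.d/02-crio.conf` (GNU sed assumed; the starting values below are invented for illustration):

```shell
conf="$(mktemp)"
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.6"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Same substitutions the ssh_runner issues, minus sudo:
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

cat "$conf"
```

The delete-then-append pair makes the edit idempotent: whatever `conmon_cgroup` value was present is replaced by exactly one `conmon_cgroup = "pod"` line.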
	I1002 11:54:40.549326  384965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:54:40.559447  384965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:54:40.570086  384965 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:54:40.570157  384965 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:54:40.582938  384965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:54:40.594250  384965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:54:40.739528  384965 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:54:40.964248  384965 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:54:40.964336  384965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:54:40.969637  384965 start.go:537] Will wait 60s for crictl version
	I1002 11:54:40.969696  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:54:40.974270  384965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:54:41.016986  384965 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:54:41.017121  384965 ssh_runner.go:195] Run: crio --version
	I1002 11:54:41.061313  384965 ssh_runner.go:195] Run: crio --version
	I1002 11:54:41.112139  384965 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:54:39.525634  384344 main.go:141] libmachine: (no-preload-304121) Calling .Start
	I1002 11:54:39.525802  384344 main.go:141] libmachine: (no-preload-304121) Ensuring networks are active...
	I1002 11:54:39.526566  384344 main.go:141] libmachine: (no-preload-304121) Ensuring network default is active
	I1002 11:54:39.526860  384344 main.go:141] libmachine: (no-preload-304121) Ensuring network mk-no-preload-304121 is active
	I1002 11:54:39.527227  384344 main.go:141] libmachine: (no-preload-304121) Getting domain xml...
	I1002 11:54:39.527942  384344 main.go:141] libmachine: (no-preload-304121) Creating domain...
	I1002 11:54:40.973483  384344 main.go:141] libmachine: (no-preload-304121) Waiting to get IP...
	I1002 11:54:40.974731  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:40.975262  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:40.975359  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:40.975266  385933 retry.go:31] will retry after 231.149062ms: waiting for machine to come up
	I1002 11:54:41.207806  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:41.208486  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:41.208522  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:41.208461  385933 retry.go:31] will retry after 390.353931ms: waiting for machine to come up
	I1002 11:54:37.939830  384787 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.269286101s)
	I1002 11:54:37.939876  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:38.149675  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:38.246179  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:38.327794  384787 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:54:38.327884  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:38.343240  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:38.855719  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:39.355428  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:39.854862  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:40.355228  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:40.855597  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:40.891530  384787 api_server.go:72] duration metric: took 2.563733499s to wait for apiserver process to appear ...
	I1002 11:54:40.891560  384787 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:54:40.891581  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:40.892226  384787 api_server.go:269] stopped: https://192.168.72.147:8443/healthz: Get "https://192.168.72.147:8443/healthz": dial tcp 192.168.72.147:8443: connect: connection refused
	I1002 11:54:40.892274  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:40.892799  384787 api_server.go:269] stopped: https://192.168.72.147:8443/healthz: Get "https://192.168.72.147:8443/healthz": dial tcp 192.168.72.147:8443: connect: connection refused
	I1002 11:54:41.393747  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:41.113638  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:41.116930  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:41.117360  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:41.117396  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:41.117684  384965 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1002 11:54:41.122622  384965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
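The `/etc/hosts` update above uses a filter-then-append pattern: strip any existing `host.minikube.internal` entry, append the fresh one, and copy the result back, so repeated runs leave exactly one entry. Demonstrated on a scratch hosts file (the real command writes through `sudo cp` to `/etc/hosts`):

```shell
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n192.168.61.1\thost.minikube.internal\n' > "$hosts"

# Drop any old entry, then append the current one (same trick as the log,
# using a temp file in place of sudo cp back over /etc/hosts):
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.61.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

cat "$hosts"
```

Writing to a temp file and moving it into place avoids truncating the file mid-read, which matters when the target is a live `/etc/hosts`.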
	I1002 11:54:41.138418  384965 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:54:41.138496  384965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:41.189380  384965 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 11:54:41.189465  384965 ssh_runner.go:195] Run: which lz4
	I1002 11:54:41.194945  384965 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1002 11:54:41.200215  384965 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:54:41.200254  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1002 11:54:43.164279  384965 crio.go:444] Took 1.969380 seconds to copy over tarball
	I1002 11:54:43.164370  384965 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
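The preload tarball is unpacked with an external decompressor via tar's `-I` flag (`tar -I lz4 -C /var -xf ...`). A self-contained reproduction of the mechanics, substituting gzip for lz4 since lz4 may not be installed everywhere (lz4 is just a faster codec; the `-I`/`-C` usage is identical):

```shell
work="$(mktemp -d)"
mkdir -p "$work/var/lib/demo"
echo preloaded > "$work/var/lib/demo/marker"

# Pack with an external compressor, mirroring the preload tarball layout
# rooted at var/:
tar -I gzip -C "$work" -cf "$work/preloaded.tar.gz" var

# Wipe and restore, as the minikube step restores images under /var:
rm -rf "$work/var"
tar -I gzip -C "$work" -xf "$work/preloaded.tar.gz"

cat "$work/var/lib/demo/marker"
```

`-C` sets the working directory for both archiving and extraction, which is how the preload lands under `/var` without absolute paths inside the archive.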
	I1002 11:54:41.447247  384505 retry.go:31] will retry after 8.441231321s: kubelet not initialised
	I1002 11:54:41.600866  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:41.601691  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:41.601729  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:41.601345  385933 retry.go:31] will retry after 381.859851ms: waiting for machine to come up
	I1002 11:54:41.985107  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:41.986545  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:41.986572  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:41.986434  385933 retry.go:31] will retry after 606.51751ms: waiting for machine to come up
	I1002 11:54:42.594443  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:42.595004  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:42.595031  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:42.594935  385933 retry.go:31] will retry after 474.689172ms: waiting for machine to come up
	I1002 11:54:43.071618  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:43.072140  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:43.072196  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:43.072085  385933 retry.go:31] will retry after 931.163736ms: waiting for machine to come up
	I1002 11:54:44.005228  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:44.005899  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:44.005927  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:44.005852  385933 retry.go:31] will retry after 1.133426769s: waiting for machine to come up
	I1002 11:54:45.141320  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:45.142068  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:45.142099  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:45.141965  385933 retry.go:31] will retry after 1.458717431s: waiting for machine to come up
	I1002 11:54:45.416658  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:54:45.416697  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:54:45.416713  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:45.489874  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:54:45.489918  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:54:45.893115  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:45.901437  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:54:45.901477  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:54:46.393114  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:46.399302  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:54:46.399337  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:54:46.892875  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:46.898524  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 200:
	ok
	I1002 11:54:46.908311  384787 api_server.go:141] control plane version: v1.28.2
	I1002 11:54:46.908342  384787 api_server.go:131] duration metric: took 6.016772427s to wait for apiserver health ...
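The healthz wait above is a poll-until-200 loop: the endpoint first refuses connections, then returns 403 (anonymous user), then 500 (post-start hooks pending), and finally 200. A minimal sketch of that loop, with the probe stubbed out to replay the log's 403 → 500 → 200 sequence so the example is self-contained (the real code issues HTTPS GETs to `/healthz` roughly every 500ms):

```shell
n=0
status=""
# Stub probe: emulates the apiserver startup sequence seen in the log.
probe() {
  n=$((n + 1))
  case $n in
    1) status=403 ;;   # anonymous user forbidden
    2) status=500 ;;   # bootstrap post-start hooks still failing
    *) status=200 ;;   # healthy
  esac
}

probe
until [ "$status" = 200 ]; do
  sleep 0.1   # the real loop waits ~500ms between healthz checks
  probe
done
echo "healthy after $n probes"
```

Note the probe sets a variable instead of echoing its result: calling it inside `$(...)` would run it in a subshell and the retry counter would never advance in the parent.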
	I1002 11:54:46.908354  384787 cni.go:84] Creating CNI manager for ""
	I1002 11:54:46.908364  384787 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:47.225292  384787 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:54:47.481617  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:54:47.499011  384787 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:54:47.535238  384787 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:54:46.620757  384965 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.456345361s)
	I1002 11:54:46.620801  384965 crio.go:451] Took 3.456492 seconds to extract the tarball
	I1002 11:54:46.620814  384965 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 11:54:46.677550  384965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:46.810235  384965 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 11:54:46.810265  384965 cache_images.go:84] Images are preloaded, skipping loading
	I1002 11:54:46.810334  384965 ssh_runner.go:195] Run: crio config
	I1002 11:54:46.875355  384965 cni.go:84] Creating CNI manager for ""
	I1002 11:54:46.875378  384965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:46.875397  384965 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:54:46.875417  384965 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.251 APIServerPort:8444 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-777999 NodeName:default-k8s-diff-port-777999 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:54:46.875588  384965 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.251
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-777999"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:54:46.875674  384965 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-777999 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1002 11:54:46.875737  384965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:54:46.886943  384965 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:54:46.887034  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:54:46.898434  384965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1002 11:54:46.917830  384965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:54:46.936297  384965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1002 11:54:46.954413  384965 ssh_runner.go:195] Run: grep 192.168.61.251	control-plane.minikube.internal$ /etc/hosts
	I1002 11:54:46.958832  384965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:46.970802  384965 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999 for IP: 192.168.61.251
	I1002 11:54:46.970845  384965 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:54:46.971031  384965 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:54:46.971093  384965 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:54:46.971194  384965 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/client.key
	I1002 11:54:46.971286  384965 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/apiserver.key.04d51ca9
	I1002 11:54:46.971341  384965 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/proxy-client.key
	I1002 11:54:46.971469  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:54:46.971507  384965 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:54:46.971524  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:54:46.971572  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:54:46.971614  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:54:46.971652  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:54:46.971713  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:46.972319  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:54:46.998880  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:54:47.024639  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:54:47.048695  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:54:47.076815  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:54:47.102469  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:54:47.128913  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:54:47.155863  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:54:47.185058  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:54:47.212289  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:54:47.236848  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:54:47.261485  384965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:54:47.278535  384965 ssh_runner.go:195] Run: openssl version
	I1002 11:54:47.284888  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:54:47.296352  384965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:54:47.301262  384965 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:54:47.301331  384965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:54:47.307136  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:54:47.317650  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:54:47.328371  384965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:47.333341  384965 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:47.333421  384965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:47.339268  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:54:47.349646  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:54:47.360575  384965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:54:47.367279  384965 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:54:47.367346  384965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:54:47.374693  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:54:47.386302  384965 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:54:47.391448  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:54:47.397407  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:54:47.403122  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:54:47.408810  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:54:47.414684  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:54:47.420606  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:54:47.426568  384965 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-777999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:54:47.426702  384965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:54:47.426747  384965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:47.467190  384965 cri.go:89] found id: ""
	I1002 11:54:47.467275  384965 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:54:47.478921  384965 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:54:47.478944  384965 kubeadm.go:636] restartCluster start
	I1002 11:54:47.479016  384965 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:54:47.492971  384965 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:47.494091  384965 kubeconfig.go:92] found "default-k8s-diff-port-777999" server: "https://192.168.61.251:8444"
	I1002 11:54:47.498738  384965 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:54:47.510376  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:47.510454  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:47.523397  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:47.523417  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:47.523459  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:47.536893  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:48.037653  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:48.037746  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:48.055280  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:48.537887  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:48.537979  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:48.555759  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:49.037998  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:49.038108  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:46.602496  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:46.654672  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:46.654707  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:46.602962  385933 retry.go:31] will retry after 1.25268648s: waiting for machine to come up
	I1002 11:54:47.857506  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:47.858115  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:47.858149  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:47.858061  385933 retry.go:31] will retry after 2.104571101s: waiting for machine to come up
	I1002 11:54:49.964533  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:49.964997  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:49.965031  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:49.964942  385933 retry.go:31] will retry after 2.047553587s: waiting for machine to come up
	I1002 11:54:47.766443  384787 system_pods.go:59] 8 kube-system pods found
	I1002 11:54:47.766485  384787 system_pods.go:61] "coredns-5dd5756b68-6glsj" [ad7c852a-cdac-4ada-99da-4115b447f00c] Running
	I1002 11:54:47.766498  384787 system_pods.go:61] "etcd-embed-certs-487027" [78f5c4ed-7baf-4339-811f-c25e934de0c4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:54:47.766516  384787 system_pods.go:61] "kube-apiserver-embed-certs-487027" [275bb65c-b955-43d9-839b-6439e8c19662] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:54:47.766524  384787 system_pods.go:61] "kube-controller-manager-embed-certs-487027" [d798407e-abe2-4b70-952e-1274fff006bc] Running
	I1002 11:54:47.766532  384787 system_pods.go:61] "kube-proxy-wjjtv" [54e35e5e-7045-497f-8fef-322fe0e43afd] Running
	I1002 11:54:47.766543  384787 system_pods.go:61] "kube-scheduler-embed-certs-487027" [62c61cf2-f18e-47a9-9729-20e87fe02c89] Running
	I1002 11:54:47.766556  384787 system_pods.go:61] "metrics-server-57f55c9bc5-d8c7b" [71c33b74-c942-403a-a1d4-2b852f0070a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:54:47.766568  384787 system_pods.go:61] "storage-provisioner" [0a8120e1-c879-4726-abab-f95a4a3c8721] Running
	I1002 11:54:47.766581  384787 system_pods.go:74] duration metric: took 231.314062ms to wait for pod list to return data ...
	I1002 11:54:47.766593  384787 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:54:48.206673  384787 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:54:48.206710  384787 node_conditions.go:123] node cpu capacity is 2
	I1002 11:54:48.206722  384787 node_conditions.go:105] duration metric: took 440.12142ms to run NodePressure ...
	I1002 11:54:48.206743  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:48.736269  384787 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:54:48.754061  384787 kubeadm.go:787] kubelet initialised
	I1002 11:54:48.754094  384787 kubeadm.go:788] duration metric: took 17.795803ms waiting for restarted kubelet to initialise ...
	I1002 11:54:48.754106  384787 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:54:48.763480  384787 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:50.815900  384787 pod_ready.go:102] pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace has status "Ready":"False"
	I1002 11:54:51.815729  384787 pod_ready.go:92] pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:51.815752  384787 pod_ready.go:81] duration metric: took 3.052241738s waiting for pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:51.815761  384787 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	W1002 11:54:49.055614  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:49.537412  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:49.537517  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:49.554838  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:50.037334  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:50.037460  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:50.050213  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:50.537454  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:50.537586  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:50.551733  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:51.037281  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:51.037394  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:51.055077  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:51.537591  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:51.537672  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:51.555315  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:52.037929  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:52.038038  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:52.052852  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:52.537358  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:52.537435  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:52.553169  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:53.037814  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:53.037913  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:53.055176  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:53.537764  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:53.537869  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:53.554864  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:54.037941  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:54.038052  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:49.895219  384505 retry.go:31] will retry after 9.020637322s: kubelet not initialised
	I1002 11:54:52.015240  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:52.015623  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:52.015646  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:52.015594  385933 retry.go:31] will retry after 3.361214112s: waiting for machine to come up
	I1002 11:54:55.378293  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:55.378805  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:55.378853  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:55.378772  385933 retry.go:31] will retry after 3.33521217s: waiting for machine to come up
	I1002 11:54:53.337930  384787 pod_ready.go:92] pod "etcd-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:53.337967  384787 pod_ready.go:81] duration metric: took 1.522199476s waiting for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:53.337979  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:53.344756  384787 pod_ready.go:92] pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:53.344782  384787 pod_ready.go:81] duration metric: took 6.79552ms waiting for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:53.344791  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:55.549698  384787 pod_ready.go:102] pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace has status "Ready":"False"
	I1002 11:54:57.049146  384787 pod_ready.go:92] pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:57.049177  384787 pod_ready.go:81] duration metric: took 3.704379238s waiting for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:57.049192  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wjjtv" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:57.055125  384787 pod_ready.go:92] pod "kube-proxy-wjjtv" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:57.055144  384787 pod_ready.go:81] duration metric: took 5.945156ms waiting for pod "kube-proxy-wjjtv" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:57.055152  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	W1002 11:54:54.056234  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:54.537821  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:54.537918  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:54.552634  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:55.037141  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:55.037220  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:55.052963  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:55.537432  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:55.537531  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:55.552525  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:56.036986  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:56.037074  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:56.049750  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:56.537060  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:56.537144  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:56.548686  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:57.037931  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:57.038029  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:57.049828  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:57.511461  384965 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:54:57.511495  384965 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:54:57.511510  384965 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:54:57.511571  384965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:57.552784  384965 cri.go:89] found id: ""
	I1002 11:54:57.552866  384965 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:54:57.567867  384965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:54:57.578391  384965 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:54:57.578474  384965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:57.587065  384965 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:57.587086  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:57.717787  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.423038  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.607300  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.687023  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.778674  384965 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:54:58.778770  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:58.794920  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:58.923574  384505 retry.go:31] will retry after 19.662203801s: kubelet not initialised
	I1002 11:54:58.715622  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.716211  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has current primary IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.716229  384344 main.go:141] libmachine: (no-preload-304121) Found IP for machine: 192.168.39.143
	I1002 11:54:58.716248  384344 main.go:141] libmachine: (no-preload-304121) Reserving static IP address...
	I1002 11:54:58.716781  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "no-preload-304121", mac: "52:54:00:11:b9:ea", ip: "192.168.39.143"} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.716823  384344 main.go:141] libmachine: (no-preload-304121) Reserved static IP address: 192.168.39.143
	I1002 11:54:58.716845  384344 main.go:141] libmachine: (no-preload-304121) DBG | skip adding static IP to network mk-no-preload-304121 - found existing host DHCP lease matching {name: "no-preload-304121", mac: "52:54:00:11:b9:ea", ip: "192.168.39.143"}
	I1002 11:54:58.716864  384344 main.go:141] libmachine: (no-preload-304121) DBG | Getting to WaitForSSH function...
	I1002 11:54:58.716875  384344 main.go:141] libmachine: (no-preload-304121) Waiting for SSH to be available...
	I1002 11:54:58.719551  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.719991  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.720031  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.720236  384344 main.go:141] libmachine: (no-preload-304121) DBG | Using SSH client type: external
	I1002 11:54:58.720273  384344 main.go:141] libmachine: (no-preload-304121) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa (-rw-------)
	I1002 11:54:58.720309  384344 main.go:141] libmachine: (no-preload-304121) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:54:58.720329  384344 main.go:141] libmachine: (no-preload-304121) DBG | About to run SSH command:
	I1002 11:54:58.720355  384344 main.go:141] libmachine: (no-preload-304121) DBG | exit 0
	I1002 11:54:58.866583  384344 main.go:141] libmachine: (no-preload-304121) DBG | SSH cmd err, output: <nil>: 
	I1002 11:54:58.866916  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetConfigRaw
	I1002 11:54:58.867637  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:54:58.870844  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.871270  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.871305  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.871677  384344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/config.json ...
	I1002 11:54:58.871886  384344 machine.go:88] provisioning docker machine ...
	I1002 11:54:58.871906  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:54:58.872159  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetMachineName
	I1002 11:54:58.872343  384344 buildroot.go:166] provisioning hostname "no-preload-304121"
	I1002 11:54:58.872370  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetMachineName
	I1002 11:54:58.872566  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:58.875795  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.876215  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.876252  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.876420  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:58.876592  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:58.876766  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:58.876935  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:58.877113  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:58.877512  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:54:58.877528  384344 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-304121 && echo "no-preload-304121" | sudo tee /etc/hostname
	I1002 11:54:59.032306  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-304121
	
	I1002 11:54:59.032336  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.035842  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.036373  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.036412  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.036749  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:59.036953  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.037145  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.037313  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:59.037564  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:59.038035  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:54:59.038064  384344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-304121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-304121/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-304121' | sudo tee -a /etc/hosts; 
				fi
			fi
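The hostname provisioning step above edits /etc/hosts idempotently: replace an existing `127.0.1.1` entry if present, otherwise append one. A minimal standalone sketch of that logic, run against a scratch copy of the file rather than the real /etc/hosts (file contents here are illustrative):

```shell
# Scratch copy standing in for /etc/hosts (illustrative contents).
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

NAME=no-preload-304121
# Same shape as the SSH command in the log: only touch the file if the
# hostname is not already present, preferring to rewrite the 127.0.1.1 line.
if ! grep -q "\s$NAME$" "$HOSTS"; then
    if grep -q '^127.0.1.1\s' "$HOSTS"; then
        sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
    else
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
grep '127.0.1.1' "$HOSTS"   # -> 127.0.1.1 no-preload-304121
```

Running it twice leaves the file unchanged on the second pass, which is why the provisioner can safely re-run it on every `fixHost`.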
	I1002 11:54:59.175880  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:54:59.175910  384344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:54:59.175933  384344 buildroot.go:174] setting up certificates
	I1002 11:54:59.175945  384344 provision.go:83] configureAuth start
	I1002 11:54:59.175957  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetMachineName
	I1002 11:54:59.176253  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:54:59.179169  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.179541  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.179577  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.179797  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.182011  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.182418  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.182451  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.182653  384344 provision.go:138] copyHostCerts
	I1002 11:54:59.182718  384344 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:54:59.182732  384344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:54:59.182807  384344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:54:59.182919  384344 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:54:59.182931  384344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:54:59.182963  384344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:54:59.183050  384344 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:54:59.183060  384344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:54:59.183088  384344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:54:59.183174  384344 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.no-preload-304121 san=[192.168.39.143 192.168.39.143 localhost 127.0.0.1 minikube no-preload-304121]
	I1002 11:54:59.492171  384344 provision.go:172] copyRemoteCerts
	I1002 11:54:59.492239  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:54:59.492266  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.495249  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.495698  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.495746  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.495900  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:59.496143  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.496299  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:59.496460  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:54:59.594538  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1002 11:54:59.625319  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 11:54:59.652745  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:54:59.676895  384344 provision.go:86] duration metric: configureAuth took 500.931279ms
	I1002 11:54:59.676930  384344 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:54:59.677160  384344 config.go:182] Loaded profile config "no-preload-304121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:54:59.677259  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.680393  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.680730  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.680764  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.681190  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:59.681491  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.681698  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.681875  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:59.682112  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:59.682651  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:54:59.682684  384344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:55:00.029184  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:55:00.029213  384344 machine.go:91] provisioned docker machine in 1.157312136s
	I1002 11:55:00.029226  384344 start.go:300] post-start starting for "no-preload-304121" (driver="kvm2")
	I1002 11:55:00.029240  384344 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:55:00.029296  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.029683  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:55:00.029722  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.032977  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.033456  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.033488  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.033677  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.033919  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.034136  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.034351  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:55:00.137946  384344 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:55:00.144169  384344 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:55:00.144209  384344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:55:00.144291  384344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:55:00.144405  384344 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:55:00.144609  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:55:00.157898  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:55:00.186547  384344 start.go:303] post-start completed in 157.300734ms
	I1002 11:55:00.186580  384344 fix.go:56] fixHost completed within 20.691216247s
	I1002 11:55:00.186609  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.189905  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.190374  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.190411  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.190718  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.190940  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.191159  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.191335  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.191494  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:55:00.191981  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:55:00.191996  384344 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:55:00.328123  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247700.270150690
	
	I1002 11:55:00.328155  384344 fix.go:206] guest clock: 1696247700.270150690
	I1002 11:55:00.328166  384344 fix.go:219] Guest: 2023-10-02 11:55:00.27015069 +0000 UTC Remote: 2023-10-02 11:55:00.186584697 +0000 UTC m=+358.877281851 (delta=83.565993ms)
	I1002 11:55:00.328193  384344 fix.go:190] guest clock delta is within tolerance: 83.565993ms
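The clock check above compares the guest's `date +%s.%N` reading against the host wall clock and accepts the machine if the drift is within tolerance. A sketch of that subtraction using the two timestamps from the log (the values are copied from the lines above; the arithmetic, not minikube's actual Go code, is what is shown):

```shell
# Timestamps taken from the log lines above.
guest=1696247700.270150690   # reported by `date +%s.%N` on the guest
host=1696247700.186584697    # local wall clock at the moment of the read
# Compute the drift in seconds; awk handles the fractional arithmetic.
delta=$(awk -v g="$guest" -v h="$host" 'BEGIN{printf "%.6f", g - h}')
echo "guest clock delta: ${delta}s"
```

The ~83ms result matches the `delta=83.565993ms` the log reports as within tolerance.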
	I1002 11:55:00.328207  384344 start.go:83] releasing machines lock for "no-preload-304121", held for 20.832874678s
	I1002 11:55:00.328234  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.328584  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:55:00.331898  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.332432  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.332468  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.332651  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.333263  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.333480  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.333586  384344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:55:00.333647  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.333895  384344 ssh_runner.go:195] Run: cat /version.json
	I1002 11:55:00.333943  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.336673  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.336920  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.337021  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.337083  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.337207  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.337399  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.337487  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.337518  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.337566  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.337642  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.337734  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:55:00.337835  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.338131  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.338307  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:55:00.427708  384344 ssh_runner.go:195] Run: systemctl --version
	I1002 11:55:00.456367  384344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:55:00.604389  384344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:55:00.612859  384344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:55:00.612968  384344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:55:00.627986  384344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:55:00.628056  384344 start.go:469] detecting cgroup driver to use...
	I1002 11:55:00.628128  384344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:55:00.643670  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:55:00.656987  384344 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:55:00.657058  384344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:55:00.669708  384344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:55:00.682586  384344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:55:00.790044  384344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:55:00.913634  384344 docker.go:213] disabling docker service ...
	I1002 11:55:00.913717  384344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:55:00.926496  384344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:55:00.938769  384344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:55:01.045413  384344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:55:01.169133  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:55:01.182168  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:55:01.201850  384344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:55:01.201926  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.214874  384344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:55:01.214972  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.225123  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.237560  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
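The four `sed` invocations above rewrite cri-o's drop-in config: pin the pause image, switch the cgroup manager to cgroupfs, and replace any `conmon_cgroup` setting with `"pod"`. The same edits can be sketched against a scratch copy of `02-crio.conf` (the starting file contents here are assumed for illustration):

```shell
# Scratch stand-in for /etc/crio/crio.conf.d/02-crio.conf (contents assumed).
CONF=$(mktemp)
printf 'pause_image = "registry.k8s.io/pause:3.6"\ncgroup_manager = "systemd"\nconmon_cgroup = "system.slice"\n' > "$CONF"

# Same substitutions as in the log, minus sudo:
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
sed -i '/conmon_cgroup = .*/d' "$CONF"                      # drop any old value
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"  # append after cgroup_manager
cat "$CONF"
```

Deleting `conmon_cgroup` before re-adding it keeps the rewrite idempotent, so repeated provisioning runs never accumulate duplicate keys.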
	I1002 11:55:01.247898  384344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:55:01.260797  384344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:55:01.271528  384344 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:55:01.271602  384344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:55:01.285906  384344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:55:01.297623  384344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:55:01.429828  384344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:55:01.617340  384344 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:55:01.617486  384344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:55:01.622871  384344 start.go:537] Will wait 60s for crictl version
	I1002 11:55:01.622942  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:01.627257  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:55:01.674032  384344 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:55:01.674130  384344 ssh_runner.go:195] Run: crio --version
	I1002 11:55:01.726822  384344 ssh_runner.go:195] Run: crio --version
	I1002 11:55:01.777433  384344 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:54:59.549254  384787 pod_ready.go:102] pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:01.550493  384787 pod_ready.go:92] pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:01.550524  384787 pod_ready.go:81] duration metric: took 4.495364436s waiting for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:01.550537  384787 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:59.310529  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:59.811582  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:00.310859  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:00.810518  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:01.311217  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:01.336761  384965 api_server.go:72] duration metric: took 2.55808678s to wait for apiserver process to appear ...
	I1002 11:55:01.336793  384965 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:55:01.336814  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:01.778891  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:55:01.781741  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:01.782048  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:01.782088  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:01.782334  384344 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 11:55:01.787047  384344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:55:01.803390  384344 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:55:01.803482  384344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:55:01.853839  384344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 11:55:01.853868  384344 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.2 registry.k8s.io/kube-controller-manager:v1.28.2 registry.k8s.io/kube-scheduler:v1.28.2 registry.k8s.io/kube-proxy:v1.28.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 11:55:01.853954  384344 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:01.853966  384344 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:01.854164  384344 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:01.854189  384344 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:01.854254  384344 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:01.854169  384344 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:01.854325  384344 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1002 11:55:01.854171  384344 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:01.855315  384344 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:01.855339  384344 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:01.855355  384344 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:01.855809  384344 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:01.855841  384344 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:01.855856  384344 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1002 11:55:01.855809  384344 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:01.855815  384344 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.001275  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.001299  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:02.001275  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:02.002150  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1002 11:55:02.004275  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:02.007591  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:02.028882  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:02.199630  384344 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1002 11:55:02.199751  384344 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.199678  384344 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1002 11:55:02.199838  384344 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:02.199866  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.199890  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.199707  384344 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.2" does not exist at hash "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57" in container runtime
	I1002 11:55:02.199951  384344 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:02.199981  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305560  384344 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.2" does not exist at hash "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce" in container runtime
	I1002 11:55:02.305618  384344 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:02.305670  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305721  384344 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.2" does not exist at hash "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8" in container runtime
	I1002 11:55:02.305784  384344 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:02.305826  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305853  384344 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.2" needs transfer: "registry.k8s.io/kube-proxy:v1.28.2" does not exist at hash "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0" in container runtime
	I1002 11:55:02.305893  384344 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:02.305934  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305943  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:02.305999  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:02.306035  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.403560  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:02.403701  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1002 11:55:02.403791  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1002 11:55:02.403861  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:02.403983  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1002 11:55:02.404056  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1002 11:55:02.404148  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.2
	I1002 11:55:02.404200  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.2
	I1002 11:55:02.404274  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:02.512787  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.2
	I1002 11:55:02.512909  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1002 11:55:02.513038  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1002 11:55:02.513062  384344 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1002 11:55:02.513091  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1002 11:55:02.513169  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.2 (exists)
	I1002 11:55:02.513217  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.2
	I1002 11:55:02.513258  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.2
	I1002 11:55:02.513292  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1002 11:55:02.513343  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.2
	I1002 11:55:02.513399  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.2
	I1002 11:55:02.519549  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.2 (exists)
	I1002 11:55:02.529685  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.2 (exists)
	I1002 11:55:02.739233  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:03.573767  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:05.577137  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:07.577690  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:06.191660  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:06.191697  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:06.191711  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:06.268234  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:06.268270  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:06.769081  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:06.775235  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:06.775267  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:07.268848  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:07.289255  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:07.289294  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:07.769010  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:07.776315  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 200:
	ok
	I1002 11:55:07.785543  384965 api_server.go:141] control plane version: v1.28.2
	I1002 11:55:07.785578  384965 api_server.go:131] duration metric: took 6.448776132s to wait for apiserver health ...
	I1002 11:55:07.785620  384965 cni.go:84] Creating CNI manager for ""
	I1002 11:55:07.785630  384965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:55:07.963339  384965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:55:07.965036  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:55:08.003261  384965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:55:08.072023  384965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:55:08.084616  384965 system_pods.go:59] 8 kube-system pods found
	I1002 11:55:08.084657  384965 system_pods.go:61] "coredns-5dd5756b68-9wv56" [f04d6125-ea28-41cc-9251-7ccee27162bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:55:08.084670  384965 system_pods.go:61] "etcd-default-k8s-diff-port-777999" [5bc34f24-2922-4ce8-b11d-935f9b3c8b4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:55:08.084680  384965 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-777999" [b44f8ca6-f43e-4c99-af8d-23255f94257c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:55:08.084693  384965 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-777999" [5e20830f-d51b-4ca7-a7b6-e0f24f4a50e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 11:55:08.084709  384965 system_pods.go:61] "kube-proxy-gchnc" [061811c7-2ac8-448a-b441-838f9aaf9145] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 11:55:08.084723  384965 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-777999" [9d874541-7d2e-4c9b-8935-2bf7386e6a07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 11:55:08.084737  384965 system_pods.go:61] "metrics-server-57f55c9bc5-wk2c7" [f28e9db7-2182-40d8-85a7-fa40c2ff8077] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:55:08.084752  384965 system_pods.go:61] "storage-provisioner" [aff1275b-909d-4c70-9fb5-cb36170c591e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:55:08.084767  384965 system_pods.go:74] duration metric: took 12.715919ms to wait for pod list to return data ...
	I1002 11:55:08.084783  384965 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:55:08.089289  384965 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:55:08.089323  384965 node_conditions.go:123] node cpu capacity is 2
	I1002 11:55:08.089337  384965 node_conditions.go:105] duration metric: took 4.548285ms to run NodePressure ...
	I1002 11:55:08.089359  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:08.496528  384965 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:55:08.509299  384965 kubeadm.go:787] kubelet initialised
	I1002 11:55:08.509331  384965 kubeadm.go:788] duration metric: took 12.771905ms waiting for restarted kubelet to initialise ...
	I1002 11:55:08.509343  384965 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:08.516124  384965 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.528838  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.528938  384965 pod_ready.go:81] duration metric: took 12.780895ms waiting for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.528967  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.529001  384965 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.534830  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.534867  384965 pod_ready.go:81] duration metric: took 5.838075ms waiting for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.534882  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.534892  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.549854  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.549885  384965 pod_ready.go:81] duration metric: took 14.983531ms waiting for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.549900  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.549913  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.559230  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.559313  384965 pod_ready.go:81] duration metric: took 9.38728ms waiting for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.559335  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.559347  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.900163  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-proxy-gchnc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.900190  384965 pod_ready.go:81] duration metric: took 340.83496ms waiting for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.900199  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-proxy-gchnc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.900208  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:09.516054  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.516096  384965 pod_ready.go:81] duration metric: took 615.877294ms waiting for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:09.516112  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.516121  384965 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:09.701735  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.701764  384965 pod_ready.go:81] duration metric: took 185.632721ms waiting for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:09.701775  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.701782  384965 pod_ready.go:38] duration metric: took 1.192428133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:09.701800  384965 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:55:09.715441  384965 ops.go:34] apiserver oom_adj: -16
	I1002 11:55:09.715471  384965 kubeadm.go:640] restartCluster took 22.236518554s
	I1002 11:55:09.715483  384965 kubeadm.go:406] StartCluster complete in 22.288924118s
	I1002 11:55:09.715506  384965 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:55:09.715603  384965 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:55:09.717604  384965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:55:09.832925  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:55:09.832958  384965 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:55:09.833045  384965 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:55:09.833070  384965 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-777999"
	I1002 11:55:09.833078  384965 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-777999"
	I1002 11:55:09.833081  384965 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-777999"
	I1002 11:55:09.833097  384965 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-777999"
	W1002 11:55:09.833106  384965 addons.go:240] addon storage-provisioner should already be in state true
	I1002 11:55:09.833106  384965 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-777999"
	I1002 11:55:09.833108  384965 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-777999"
	W1002 11:55:09.833125  384965 addons.go:240] addon metrics-server should already be in state true
	I1002 11:55:09.833170  384965 host.go:66] Checking if "default-k8s-diff-port-777999" exists ...
	I1002 11:55:09.833170  384965 host.go:66] Checking if "default-k8s-diff-port-777999" exists ...
	I1002 11:55:09.833570  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.833592  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.833615  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.833624  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.833634  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.833646  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.839134  384965 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-777999" context rescaled to 1 replicas
	I1002 11:55:09.839204  384965 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:55:09.882782  384965 out.go:177] * Verifying Kubernetes components...
	I1002 11:55:09.852478  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I1002 11:55:09.853164  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46377
	I1002 11:55:09.853212  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33563
	I1002 11:55:09.884413  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:55:09.884847  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.884862  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.884978  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.885450  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.885473  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.885590  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.885616  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.885875  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.885905  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.885931  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.885991  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.886291  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.886499  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.886608  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.886609  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.886643  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.886650  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.890816  384965 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-777999"
	W1002 11:55:09.890840  384965 addons.go:240] addon default-storageclass should already be in state true
	I1002 11:55:09.890874  384965 host.go:66] Checking if "default-k8s-diff-port-777999" exists ...
	I1002 11:55:09.891346  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.891381  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.905399  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I1002 11:55:09.905472  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1002 11:55:09.905949  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.906013  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.906516  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.906548  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.906616  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.906638  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.907044  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.907050  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.907204  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.907296  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.907802  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I1002 11:55:09.908797  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.909184  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:55:09.911200  384965 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 11:55:09.909554  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.909557  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:55:09.913028  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.913040  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 11:55:09.913097  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 11:55:09.913128  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:55:09.914961  384965 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:10.102329  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.589219551s)
	I1002 11:55:10.102369  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1002 11:55:10.102405  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.2
	I1002 11:55:10.102437  384344 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.2: (7.58915139s)
	I1002 11:55:10.102467  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.2 (exists)
	I1002 11:55:10.102468  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.2
	I1002 11:55:10.102517  384344 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (7.363200276s)
	I1002 11:55:10.102554  384344 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1002 11:55:10.102587  384344 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:10.102639  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:10.107376  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:09.913417  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.916644  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.916734  384965 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:55:09.916751  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:55:09.916773  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:55:09.917177  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.917217  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.917938  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:55:09.917968  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.918238  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:55:09.918494  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:55:09.918725  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:55:09.919087  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:55:09.920001  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.920470  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:55:09.920499  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.920702  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:55:09.920898  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:55:09.921037  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:55:09.921164  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:55:09.936676  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41353
	I1002 11:55:09.937243  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.937814  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.937838  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.938269  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.938503  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.940662  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:55:09.940930  384965 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:55:09.940952  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:55:09.940975  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:55:09.944168  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.944929  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:55:09.944938  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:55:09.944972  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.945129  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:55:09.945323  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:55:09.945464  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:55:10.048027  384965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:55:10.064428  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 11:55:10.064457  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 11:55:10.113892  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 11:55:10.113922  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 11:55:10.162803  384965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:55:10.203352  384965 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-777999" to be "Ready" ...
	I1002 11:55:10.203377  384965 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1002 11:55:10.209916  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:55:10.209945  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 11:55:10.283168  384965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:55:11.838556  384965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.790470973s)
	I1002 11:55:11.838584  384965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.675739061s)
	I1002 11:55:11.838618  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.838620  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.838659  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.838635  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.838886  384965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.555664753s)
	I1002 11:55:11.838941  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.838954  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.838980  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.838992  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.839001  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.838961  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.839104  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.839139  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.839157  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.839170  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.839303  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.839369  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.839409  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.839421  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.839431  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.839688  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.839700  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.839710  384965 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-777999"
	I1002 11:55:11.841889  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.841915  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.842201  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.842253  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.842259  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.842269  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.849511  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.849529  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.849874  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.849878  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.849901  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.853656  384965 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1002 11:55:10.075236  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:12.576161  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:11.855303  384965 addons.go:502] enable addons completed in 2.022363817s: enabled=[metrics-server storage-provisioner default-storageclass]
	I1002 11:55:12.217572  384965 node_ready.go:58] node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:12.931492  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.2: (2.828987001s)
	I1002 11:55:12.931534  384344 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.824127868s)
	I1002 11:55:12.931594  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1002 11:55:12.931539  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.2 from cache
	I1002 11:55:12.931660  384344 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1002 11:55:12.931718  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1002 11:55:12.931728  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1002 11:55:12.939018  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1002 11:55:14.293770  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.362024408s)
	I1002 11:55:14.293812  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1002 11:55:14.293844  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1002 11:55:14.293919  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1002 11:55:15.843943  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.2: (1.549996136s)
	I1002 11:55:15.843970  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.2 from cache
	I1002 11:55:15.843995  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.2
	I1002 11:55:15.844044  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.2
	I1002 11:55:15.077109  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:17.575669  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:14.219000  384965 node_ready.go:58] node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:16.717611  384965 node_ready.go:49] node "default-k8s-diff-port-777999" has status "Ready":"True"
	I1002 11:55:16.717639  384965 node_ready.go:38] duration metric: took 6.514250616s waiting for node "default-k8s-diff-port-777999" to be "Ready" ...
	I1002 11:55:16.717652  384965 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:16.724331  384965 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.242058  384965 pod_ready.go:92] pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:17.242084  384965 pod_ready.go:81] duration metric: took 517.728305ms waiting for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.242093  384965 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.247916  384965 pod_ready.go:92] pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:17.247946  384965 pod_ready.go:81] duration metric: took 5.844733ms waiting for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.247960  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.596133  384505 kubeadm.go:787] kubelet initialised
	I1002 11:55:18.596163  384505 kubeadm.go:788] duration metric: took 48.489169583s waiting for restarted kubelet to initialise ...
	I1002 11:55:18.596173  384505 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:18.603606  384505 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9xdpq" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.612080  384505 pod_ready.go:92] pod "coredns-5644d7b6d9-9xdpq" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.612112  384505 pod_ready.go:81] duration metric: took 8.472159ms waiting for pod "coredns-5644d7b6d9-9xdpq" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.612124  384505 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-xrfq8" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.618116  384505 pod_ready.go:92] pod "coredns-5644d7b6d9-xrfq8" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.618147  384505 pod_ready.go:81] duration metric: took 6.014635ms waiting for pod "coredns-5644d7b6d9-xrfq8" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.618159  384505 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.624120  384505 pod_ready.go:92] pod "etcd-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.624148  384505 pod_ready.go:81] duration metric: took 5.979959ms waiting for pod "etcd-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.624162  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.631373  384505 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.631404  384505 pod_ready.go:81] duration metric: took 7.233318ms waiting for pod "kube-apiserver-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.631418  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.990560  384505 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.990593  384505 pod_ready.go:81] duration metric: took 359.165649ms waiting for pod "kube-controller-manager-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.990608  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gkhxb" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.708531  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.2: (1.864455947s)
	I1002 11:55:17.708567  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.2 from cache
	I1002 11:55:17.708616  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.2
	I1002 11:55:17.708669  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.2
	I1002 11:55:20.492385  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.2: (2.783683562s)
	I1002 11:55:20.492427  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.2 from cache
	I1002 11:55:20.492455  384344 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1002 11:55:20.492508  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1002 11:55:19.575875  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:22.075666  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:19.526494  384965 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:19.526525  384965 pod_ready.go:81] duration metric: took 2.278556042s waiting for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.526542  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:20.927586  384965 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:20.927626  384965 pod_ready.go:81] duration metric: took 1.401074339s waiting for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:20.927641  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.117907  384965 pod_ready.go:92] pod "kube-proxy-gchnc" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:21.117943  384965 pod_ready.go:81] duration metric: took 190.292051ms waiting for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.117957  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.517768  384965 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:21.517788  384965 pod_ready.go:81] duration metric: took 399.822591ms waiting for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.517800  384965 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:23.829704  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:19.390560  384505 pod_ready.go:92] pod "kube-proxy-gkhxb" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:19.390588  384505 pod_ready.go:81] duration metric: took 399.970888ms waiting for pod "kube-proxy-gkhxb" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.390602  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.791405  384505 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:19.791443  384505 pod_ready.go:81] duration metric: took 400.826662ms waiting for pod "kube-scheduler-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.791458  384505 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:22.098383  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:24.098434  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:21.439323  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1002 11:55:21.439378  384344 cache_images.go:123] Successfully loaded all cached images
	I1002 11:55:21.439386  384344 cache_images.go:92] LoadImages completed in 19.585504619s
	I1002 11:55:21.439504  384344 ssh_runner.go:195] Run: crio config
	I1002 11:55:21.510657  384344 cni.go:84] Creating CNI manager for ""
	I1002 11:55:21.510683  384344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:55:21.510703  384344 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:55:21.510734  384344 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.143 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-304121 NodeName:no-preload-304121 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:55:21.511445  384344 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-304121"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.143
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.143"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:55:21.511576  384344 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-304121 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:no-preload-304121 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:55:21.511643  384344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:55:21.522719  384344 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:55:21.522788  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:55:21.531557  384344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1002 11:55:21.548551  384344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:55:21.565791  384344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1002 11:55:21.583240  384344 ssh_runner.go:195] Run: grep 192.168.39.143	control-plane.minikube.internal$ /etc/hosts
	I1002 11:55:21.587268  384344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:55:21.600487  384344 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121 for IP: 192.168.39.143
	I1002 11:55:21.600520  384344 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:55:21.600663  384344 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:55:21.600697  384344 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:55:21.600794  384344 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/client.key
	I1002 11:55:21.600873  384344 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/apiserver.key.62e94479
	I1002 11:55:21.600926  384344 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/proxy-client.key
	I1002 11:55:21.601033  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:55:21.601061  384344 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:55:21.601071  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:55:21.601093  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:55:21.601118  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:55:21.601146  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:55:21.601182  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:55:21.601818  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:55:21.626860  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:55:21.650402  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:55:21.678876  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 11:55:21.704351  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:55:21.729385  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:55:21.755185  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:55:21.779149  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:55:21.802775  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:55:21.825691  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:55:21.849575  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:55:21.872777  384344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:55:21.890629  384344 ssh_runner.go:195] Run: openssl version
	I1002 11:55:21.896382  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:55:21.906415  384344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:55:21.911134  384344 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:55:21.911202  384344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:55:21.916782  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:55:21.926770  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:55:21.936394  384344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:55:21.940874  384344 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:55:21.940944  384344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:55:21.946542  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:55:21.956590  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:55:21.966128  384344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:55:21.971092  384344 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:55:21.971144  384344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:55:21.976625  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:55:21.987142  384344 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:55:21.991548  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:55:21.998311  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:55:22.004302  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:55:22.010267  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:55:22.016280  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:55:22.022273  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:55:22.027921  384344 kubeadm.go:404] StartCluster: {Name:no-preload-304121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-304121 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:55:22.028050  384344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:55:22.028141  384344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:55:22.068066  384344 cri.go:89] found id: ""
	I1002 11:55:22.068147  384344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:55:22.079381  384344 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:55:22.079406  384344 kubeadm.go:636] restartCluster start
	I1002 11:55:22.079471  384344 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:55:22.088977  384344 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:22.090087  384344 kubeconfig.go:92] found "no-preload-304121" server: "https://192.168.39.143:8443"
	I1002 11:55:22.093401  384344 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:55:22.103315  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:22.103378  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:22.114520  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:22.114538  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:22.114586  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:22.126040  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:22.626326  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:22.626438  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:22.637215  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:23.126863  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:23.126967  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:23.138035  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:23.626453  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:23.626539  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:23.639113  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:24.126445  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:24.126541  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:24.139561  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:24.626423  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:24.626534  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:24.638442  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:25.127011  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:25.127103  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:25.139945  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:25.626451  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:25.626539  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:25.638919  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:26.126459  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:26.126551  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:26.140068  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:24.574146  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.574656  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.329321  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:28.329400  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.098690  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:28.098837  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.626344  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:26.626445  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:26.641274  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:27.126886  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:27.126965  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:27.139451  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:27.627110  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:27.627264  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:27.640675  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:28.126212  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:28.126301  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:28.140048  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:28.626433  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:28.626530  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:28.639683  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:29.127030  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:29.127142  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:29.139681  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:29.626803  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:29.626878  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:29.639468  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:30.127126  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:30.127231  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:30.140930  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:30.626441  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:30.626535  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:30.639070  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:31.126421  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:31.126503  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:31.138724  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:28.575201  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:31.074607  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:30.830079  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:32.832350  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:30.099074  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:32.596870  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:31.627189  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:31.627281  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:31.640362  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:32.104121  384344 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:55:32.104153  384344 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:55:32.104169  384344 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:55:32.104223  384344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:55:32.147672  384344 cri.go:89] found id: ""
	I1002 11:55:32.147756  384344 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:55:32.164049  384344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:55:32.174941  384344 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:55:32.175041  384344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:55:32.185756  384344 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:55:32.185783  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:32.328093  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.120678  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.341378  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.433591  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.518381  384344 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:55:33.518458  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:33.530334  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:34.043021  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:34.542602  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:35.042825  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:35.542484  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:36.042547  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:36.067551  384344 api_server.go:72] duration metric: took 2.549193903s to wait for apiserver process to appear ...
	I1002 11:55:36.067574  384344 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:55:36.067593  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:33.076598  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:35.077561  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:37.575927  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:35.328950  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:37.330925  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:34.598649  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:36.598851  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:39.099902  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:40.195285  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:40.195318  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:40.195330  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:40.261287  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:40.261324  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:40.762016  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:40.776249  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:40.776279  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:41.262027  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:41.277940  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:41.277971  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:41.762404  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:41.767751  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 200:
	ok
	I1002 11:55:41.775963  384344 api_server.go:141] control plane version: v1.28.2
	I1002 11:55:41.775988  384344 api_server.go:131] duration metric: took 5.708406738s to wait for apiserver health ...
	I1002 11:55:41.775997  384344 cni.go:84] Creating CNI manager for ""
	I1002 11:55:41.776003  384344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:55:41.777791  384344 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:55:40.076215  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:42.574607  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:39.831982  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:42.330541  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:41.599812  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:44.097139  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:41.779495  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:55:41.796340  384344 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:55:41.838383  384344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:55:41.863561  384344 system_pods.go:59] 8 kube-system pods found
	I1002 11:55:41.863600  384344 system_pods.go:61] "coredns-5dd5756b68-hn8bw" [f388b655-7f90-436d-a1fd-458f22c7f5e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:55:41.863612  384344 system_pods.go:61] "etcd-no-preload-304121" [b45507da-d57a-45f5-82a3-37b273c42747] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:55:41.863621  384344 system_pods.go:61] "kube-apiserver-no-preload-304121" [7f8cdde0-5050-4cea-87c5-56bd0a5d623b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:55:41.863630  384344 system_pods.go:61] "kube-controller-manager-no-preload-304121" [24d40a92-d549-48c8-bf5f-983fdc15dcae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 11:55:41.863641  384344 system_pods.go:61] "kube-proxy-cwvr7" [9e3f08e6-92ad-4ebc-afe3-44d5ab81a63a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 11:55:41.863651  384344 system_pods.go:61] "kube-scheduler-no-preload-304121" [cc3c6828-f829-416a-9cfd-ddcc0f485578] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 11:55:41.863665  384344 system_pods.go:61] "metrics-server-57f55c9bc5-lrqt9" [7b70c72d-06b3-40ae-8e0c-ea4794cfe47b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:55:41.863682  384344 system_pods.go:61] "storage-provisioner" [457608a4-5ba9-45d2-841e-889930ce6bd7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:55:41.863694  384344 system_pods.go:74] duration metric: took 25.279676ms to wait for pod list to return data ...
	I1002 11:55:41.863707  384344 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:55:41.870534  384344 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:55:41.870580  384344 node_conditions.go:123] node cpu capacity is 2
	I1002 11:55:41.870636  384344 node_conditions.go:105] duration metric: took 6.921999ms to run NodePressure ...
	I1002 11:55:41.870666  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:42.164858  384344 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:55:42.169831  384344 kubeadm.go:787] kubelet initialised
	I1002 11:55:42.169855  384344 kubeadm.go:788] duration metric: took 4.969744ms waiting for restarted kubelet to initialise ...
	I1002 11:55:42.169864  384344 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:42.176338  384344 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:44.195428  384344 pod_ready.go:102] pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:46.195763  384344 pod_ready.go:92] pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:46.195786  384344 pod_ready.go:81] duration metric: took 4.019424872s waiting for pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:46.195795  384344 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:44.581249  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:47.074875  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:44.331120  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:46.833248  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:46.099661  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:48.599051  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:48.217529  384344 pod_ready.go:102] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:50.218641  384344 pod_ready.go:102] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:49.575639  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:52.074550  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:49.329627  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:51.330613  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:53.330666  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:51.098233  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:53.098464  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:52.717990  384344 pod_ready.go:102] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:53.716716  384344 pod_ready.go:92] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:53.716751  384344 pod_ready.go:81] duration metric: took 7.520948071s waiting for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:53.716769  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.738808  384344 pod_ready.go:92] pod "kube-apiserver-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.738832  384344 pod_ready.go:81] duration metric: took 1.022054915s waiting for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.738841  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.743979  384344 pod_ready.go:92] pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.743997  384344 pod_ready.go:81] duration metric: took 5.14952ms waiting for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.744006  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cwvr7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.749813  384344 pod_ready.go:92] pod "kube-proxy-cwvr7" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.749843  384344 pod_ready.go:81] duration metric: took 5.828956ms waiting for pod "kube-proxy-cwvr7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.749855  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.913811  384344 pod_ready.go:92] pod "kube-scheduler-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.913840  384344 pod_ready.go:81] duration metric: took 163.97545ms waiting for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.913853  384344 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.075263  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:56.574518  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:55.829643  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:58.328816  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:55.597512  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:57.598176  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:57.221008  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:59.221092  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:01.221270  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:59.075344  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:01.576898  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:00.330184  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:02.332041  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:59.599606  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:02.098251  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:04.098441  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:03.222251  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:05.721050  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:03.577043  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:06.075021  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:04.829434  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:06.830586  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:08.830689  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:06.100229  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:08.597399  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:07.725911  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:10.222275  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:08.574907  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:11.075011  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:10.831040  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:13.330226  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:10.599336  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:12.601338  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:12.721538  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:14.732864  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:13.075225  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:15.575267  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:15.831410  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:18.328821  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:15.098085  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:17.598406  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:17.220843  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:19.221812  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:18.074885  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:20.575220  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:20.830090  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:23.329239  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:20.108397  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:22.597329  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:21.723316  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:24.220817  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:26.222858  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:23.075276  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:25.574332  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:27.574872  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:25.330095  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:27.831991  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:24.598737  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:27.098098  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:28.721424  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:30.721466  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:30.074535  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:32.075748  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:30.330155  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:32.830009  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:29.597397  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:31.598389  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:33.598490  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:33.223521  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:35.719548  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:34.575020  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.074654  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:35.331567  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.832286  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:35.598829  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.599403  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.722451  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:40.223547  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:39.075433  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:41.575885  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:40.329838  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:42.330038  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:40.099862  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:42.598269  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:42.723887  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:45.221944  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:44.075128  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:46.075540  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:44.331960  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:46.829987  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:45.097469  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:47.098616  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:47.222108  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:49.721938  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:48.589935  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:51.074993  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:49.331749  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:51.830280  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:53.830731  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:49.598433  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:52.097486  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:54.098228  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:52.222646  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:54.726547  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:53.076322  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:55.575236  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:56.329005  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:58.330077  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:56.598418  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:59.098019  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:57.221753  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:59.721824  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:58.074481  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:00.576860  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:00.831342  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:03.328695  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:01.598124  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:04.098241  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:02.221634  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:04.222422  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:03.075152  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:05.076964  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:07.577621  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:05.328811  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:07.329223  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:06.598041  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:09.097384  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:06.724181  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:09.221108  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:11.223407  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:10.077910  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:12.574292  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:09.331559  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:11.828655  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:13.829065  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:11.098632  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:13.099363  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:13.721785  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:16.222201  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:14.574467  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:16.576124  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:15.829618  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:17.830298  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:15.598739  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:18.097854  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:18.722947  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:21.220868  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:19.074608  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:21.079563  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:20.329680  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:22.335299  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:20.109847  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:22.598994  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:23.221458  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:25.222249  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:23.575662  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:26.075111  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:24.829500  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:26.830678  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:25.099426  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:27.598577  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:27.721159  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:29.725949  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:28.574416  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:30.576031  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:29.330079  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:31.330829  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:33.829243  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:30.098615  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:32.598161  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:32.220933  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:34.720190  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:33.075330  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:35.075824  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:37.574487  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:35.829585  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:38.333997  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:34.598838  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:37.098682  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:36.723779  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:39.222751  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:40.074293  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:42.574665  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:40.829324  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:43.329265  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:39.598047  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:41.598338  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:44.097421  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:41.720538  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:43.721398  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:46.220972  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:45.074832  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:47.573962  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:45.330175  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:47.829115  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:46.097496  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:48.098108  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:48.221977  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:50.222810  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:49.576755  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.076442  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:49.829764  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.330051  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:50.099771  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.599534  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.223223  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:54.721544  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:54.574341  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:56.574466  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:54.829215  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:56.829468  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:58.829730  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:55.097141  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:57.598230  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:57.221854  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:59.721190  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:58.574928  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:00.575201  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:00.830156  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:03.329206  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:59.599838  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:02.097630  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:04.099434  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:01.724512  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:04.223282  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:03.076896  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:05.576101  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:05.330313  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:07.830038  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:06.597389  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:09.098677  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:06.721370  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:08.723225  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:11.224608  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:08.076078  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:10.574982  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:12.575115  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:09.832412  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:12.330220  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:11.597760  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:13.598933  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:13.726487  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.220404  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:14.575310  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.576156  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:14.330536  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.829762  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:18.833076  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.099600  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:18.599713  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:18.222118  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:20.722548  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:19.076690  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:21.575073  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:21.330604  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.829742  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:21.099777  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.598614  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.220183  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:25.221895  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.575355  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:25.575510  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:25.830538  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:28.329783  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:26.097290  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:28.097568  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:27.722661  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.221305  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:28.074457  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.074944  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:32.075905  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.831228  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:33.328903  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.098502  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:32.599120  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:32.221445  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:34.224133  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:34.075953  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:36.574997  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:35.330632  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:37.830117  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:35.101830  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:37.597886  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:36.722453  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:38.722619  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:40.725507  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:39.077321  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:41.574812  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:40.329004  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:42.329704  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:39.598243  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:41.600336  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:44.098496  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:43.225247  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:45.721116  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:43.574928  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:46.073774  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:44.830119  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:47.330229  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:46.101053  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:48.597255  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:47.724301  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:50.220275  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:48.074634  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:50.075498  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:52.576147  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:49.829149  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:52.328994  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:50.598113  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:53.096876  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:52.224282  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:54.721074  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:54.576355  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:57.074445  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:54.330474  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:56.331220  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:58.829693  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:55.098655  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:57.598659  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:56.721698  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:58.721958  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:01.222685  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:59.074760  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:01.076178  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:01.551409  384787 pod_ready.go:81] duration metric: took 4m0.000833874s waiting for pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:01.551453  384787 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:01.551481  384787 pod_ready.go:38] duration metric: took 4m12.797362192s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:01.551549  384787 kubeadm.go:640] restartCluster took 4m35.116019688s
	W1002 11:59:01.551687  384787 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1002 11:59:01.551757  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 11:59:00.830381  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:02.830963  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:00.103080  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:02.600662  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:03.720777  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:05.722315  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:05.330034  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:07.835944  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:05.098121  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:07.098246  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:09.099171  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:07.725245  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:10.221073  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:10.328885  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:12.331198  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:11.599122  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:14.099609  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:15.268063  384787 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.716271748s)
	I1002 11:59:15.268160  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:15.282632  384787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:59:15.294231  384787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:59:15.305847  384787 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:59:15.305892  384787 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 11:59:15.365627  384787 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 11:59:15.365703  384787 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 11:59:15.546049  384787 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 11:59:15.546175  384787 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 11:59:15.546300  384787 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 11:59:15.810889  384787 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 11:59:12.221147  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:14.222293  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:16.223901  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:15.813908  384787 out.go:204]   - Generating certificates and keys ...
	I1002 11:59:15.814079  384787 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 11:59:15.814178  384787 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 11:59:15.814257  384787 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 11:59:15.814309  384787 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1002 11:59:15.814451  384787 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 11:59:15.814528  384787 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1002 11:59:15.814874  384787 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1002 11:59:15.815489  384787 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1002 11:59:15.816067  384787 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 11:59:15.816586  384787 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 11:59:15.817099  384787 kubeadm.go:322] [certs] Using the existing "sa" key
	I1002 11:59:15.817161  384787 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 11:59:15.988485  384787 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 11:59:16.038665  384787 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 11:59:16.218038  384787 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 11:59:16.415133  384787 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 11:59:16.415531  384787 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 11:59:16.418000  384787 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 11:59:16.420952  384787 out.go:204]   - Booting up control plane ...
	I1002 11:59:16.421147  384787 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 11:59:16.421273  384787 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 11:59:16.423255  384787 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 11:59:16.442699  384787 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:59:16.443964  384787 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:59:16.444055  384787 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 11:59:16.602169  384787 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 11:59:14.331978  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:16.830188  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:18.831449  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:16.597731  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:18.598683  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:18.722865  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:21.222671  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:21.329396  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:21.518315  384965 pod_ready.go:81] duration metric: took 4m0.000482629s waiting for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:21.518363  384965 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:21.518378  384965 pod_ready.go:38] duration metric: took 4m4.800712941s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:21.518406  384965 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:59:21.518451  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 11:59:21.518519  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 11:59:21.587182  384965 cri.go:89] found id: "3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:21.587210  384965 cri.go:89] found id: ""
	I1002 11:59:21.587221  384965 logs.go:284] 1 containers: [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735]
	I1002 11:59:21.587285  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.592996  384965 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 11:59:21.593072  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 11:59:21.635267  384965 cri.go:89] found id: "8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:21.635293  384965 cri.go:89] found id: ""
	I1002 11:59:21.635306  384965 logs.go:284] 1 containers: [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d]
	I1002 11:59:21.635367  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.640347  384965 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 11:59:21.640428  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 11:59:21.686113  384965 cri.go:89] found id: "f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:21.686146  384965 cri.go:89] found id: ""
	I1002 11:59:21.686157  384965 logs.go:284] 1 containers: [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d]
	I1002 11:59:21.686224  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.691867  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 11:59:21.691959  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 11:59:21.745210  384965 cri.go:89] found id: "7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:21.745245  384965 cri.go:89] found id: ""
	I1002 11:59:21.745257  384965 logs.go:284] 1 containers: [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e]
	I1002 11:59:21.745330  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.750774  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 11:59:21.750862  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 11:59:21.810054  384965 cri.go:89] found id: "d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:21.810084  384965 cri.go:89] found id: ""
	I1002 11:59:21.810099  384965 logs.go:284] 1 containers: [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6]
	I1002 11:59:21.810161  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.815433  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 11:59:21.815518  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 11:59:21.858759  384965 cri.go:89] found id: "beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:21.858794  384965 cri.go:89] found id: ""
	I1002 11:59:21.858807  384965 logs.go:284] 1 containers: [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f]
	I1002 11:59:21.858887  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.864818  384965 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 11:59:21.864900  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 11:59:21.920312  384965 cri.go:89] found id: ""
	I1002 11:59:21.920343  384965 logs.go:284] 0 containers: []
	W1002 11:59:21.920353  384965 logs.go:286] No container was found matching "kindnet"
	I1002 11:59:21.920362  384965 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 11:59:21.920429  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 11:59:21.964677  384965 cri.go:89] found id: "2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:21.964708  384965 cri.go:89] found id: "b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:21.964715  384965 cri.go:89] found id: ""
	I1002 11:59:21.964724  384965 logs.go:284] 2 containers: [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358]
	I1002 11:59:21.964812  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.970514  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.976118  384965 logs.go:123] Gathering logs for kube-apiserver [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735] ...
	I1002 11:59:21.976158  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:22.026289  384965 logs.go:123] Gathering logs for kube-controller-manager [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f] ...
	I1002 11:59:22.026337  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:22.094330  384965 logs.go:123] Gathering logs for storage-provisioner [b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358] ...
	I1002 11:59:22.094389  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:22.133879  384965 logs.go:123] Gathering logs for container status ...
	I1002 11:59:22.133911  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 11:59:22.186645  384965 logs.go:123] Gathering logs for dmesg ...
	I1002 11:59:22.186688  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 11:59:22.200091  384965 logs.go:123] Gathering logs for kube-proxy [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6] ...
	I1002 11:59:22.200132  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:22.245383  384965 logs.go:123] Gathering logs for etcd [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d] ...
	I1002 11:59:22.245420  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:22.312167  384965 logs.go:123] Gathering logs for kube-scheduler [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e] ...
	I1002 11:59:22.312212  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:22.358596  384965 logs.go:123] Gathering logs for kubelet ...
	I1002 11:59:22.358631  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 11:59:22.417643  384965 logs.go:123] Gathering logs for coredns [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d] ...
	I1002 11:59:22.417695  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:22.467793  384965 logs.go:123] Gathering logs for storage-provisioner [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175] ...
	I1002 11:59:22.467830  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:22.509173  384965 logs.go:123] Gathering logs for CRI-O ...
	I1002 11:59:22.509216  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 11:59:23.037502  384965 logs.go:123] Gathering logs for describe nodes ...
	I1002 11:59:23.037554  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 11:59:19.792274  384505 pod_ready.go:81] duration metric: took 4m0.000796599s waiting for pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:19.792309  384505 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:19.792337  384505 pod_ready.go:38] duration metric: took 4m1.196150969s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:19.792389  384505 kubeadm.go:640] restartCluster took 5m11.202020009s
	W1002 11:59:19.792478  384505 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1002 11:59:19.792509  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 11:59:24.926525  384505 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.133982838s)
	I1002 11:59:24.926616  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:24.943054  384505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:59:24.953201  384505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:59:24.963105  384505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:59:24.963158  384505 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1002 11:59:25.027860  384505 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1002 11:59:25.027986  384505 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 11:59:25.214224  384505 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 11:59:25.214399  384505 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 11:59:25.214529  384505 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 11:59:25.472019  384505 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:59:25.472706  384505 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:59:25.481965  384505 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1002 11:59:25.630265  384505 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 11:59:25.105120  384787 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502545 seconds
	I1002 11:59:25.105321  384787 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 11:59:25.124191  384787 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 11:59:25.659886  384787 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 11:59:25.660110  384787 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-487027 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 11:59:26.180742  384787 kubeadm.go:322] [bootstrap-token] Using token: tg9u90.7q86afgrs7pieyop
	I1002 11:59:23.723485  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:25.724673  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:26.182574  384787 out.go:204]   - Configuring RBAC rules ...
	I1002 11:59:26.182738  384787 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 11:59:26.190559  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 11:59:26.200659  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 11:59:26.212391  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 11:59:26.217946  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 11:59:26.226534  384787 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 11:59:26.248000  384787 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 11:59:26.545226  384787 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 11:59:26.604475  384787 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 11:59:26.605636  384787 kubeadm.go:322] 
	I1002 11:59:26.605726  384787 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 11:59:26.605738  384787 kubeadm.go:322] 
	I1002 11:59:26.605810  384787 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 11:59:26.605815  384787 kubeadm.go:322] 
	I1002 11:59:26.605844  384787 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 11:59:26.605914  384787 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 11:59:26.605973  384787 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 11:59:26.605981  384787 kubeadm.go:322] 
	I1002 11:59:26.606052  384787 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 11:59:26.606058  384787 kubeadm.go:322] 
	I1002 11:59:26.606097  384787 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 11:59:26.606101  384787 kubeadm.go:322] 
	I1002 11:59:26.606143  384787 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 11:59:26.606203  384787 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 11:59:26.606263  384787 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 11:59:26.606267  384787 kubeadm.go:322] 
	I1002 11:59:26.606334  384787 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 11:59:26.606438  384787 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 11:59:26.606446  384787 kubeadm.go:322] 
	I1002 11:59:26.606580  384787 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tg9u90.7q86afgrs7pieyop \
	I1002 11:59:26.606732  384787 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 11:59:26.606764  384787 kubeadm.go:322] 	--control-plane 
	I1002 11:59:26.606773  384787 kubeadm.go:322] 
	I1002 11:59:26.606906  384787 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 11:59:26.606919  384787 kubeadm.go:322] 
	I1002 11:59:26.607066  384787 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tg9u90.7q86afgrs7pieyop \
	I1002 11:59:26.607192  384787 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 11:59:26.608470  384787 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 11:59:26.608503  384787 cni.go:84] Creating CNI manager for ""
	I1002 11:59:26.608547  384787 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:59:26.610426  384787 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:59:25.632074  384505 out.go:204]   - Generating certificates and keys ...
	I1002 11:59:25.632197  384505 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 11:59:25.632294  384505 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 11:59:25.632398  384505 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 11:59:25.632546  384505 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1002 11:59:25.632693  384505 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 11:59:25.633319  384505 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1002 11:59:25.633417  384505 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1002 11:59:25.633720  384505 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1002 11:59:25.634302  384505 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 11:59:25.635341  384505 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 11:59:25.635391  384505 kubeadm.go:322] [certs] Using the existing "sa" key
	I1002 11:59:25.635461  384505 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 11:59:25.743684  384505 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 11:59:25.940709  384505 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 11:59:26.418951  384505 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 11:59:26.676172  384505 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 11:59:26.677698  384505 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 11:59:26.612002  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:59:26.646809  384787 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:59:26.709486  384787 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:59:26.709648  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:26.709720  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=embed-certs-487027 minikube.k8s.io/updated_at=2023_10_02T11_59_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:26.778472  384787 ops.go:34] apiserver oom_adj: -16
	I1002 11:59:27.199359  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:27.351099  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:25.716079  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:59:25.739754  384965 api_server.go:72] duration metric: took 4m15.900505961s to wait for apiserver process to appear ...
	I1002 11:59:25.739785  384965 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:59:25.739834  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 11:59:25.739904  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 11:59:25.788719  384965 cri.go:89] found id: "3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:25.788747  384965 cri.go:89] found id: ""
	I1002 11:59:25.788758  384965 logs.go:284] 1 containers: [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735]
	I1002 11:59:25.788824  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.794426  384965 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 11:59:25.794500  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 11:59:25.836689  384965 cri.go:89] found id: "8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:25.836721  384965 cri.go:89] found id: ""
	I1002 11:59:25.836731  384965 logs.go:284] 1 containers: [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d]
	I1002 11:59:25.836808  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.841671  384965 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 11:59:25.841744  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 11:59:25.883947  384965 cri.go:89] found id: "f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:25.883976  384965 cri.go:89] found id: ""
	I1002 11:59:25.883986  384965 logs.go:284] 1 containers: [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d]
	I1002 11:59:25.884049  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.892631  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 11:59:25.892758  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 11:59:25.966469  384965 cri.go:89] found id: "7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:25.966502  384965 cri.go:89] found id: ""
	I1002 11:59:25.966514  384965 logs.go:284] 1 containers: [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e]
	I1002 11:59:25.966575  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.971814  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 11:59:25.971890  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 11:59:26.020970  384965 cri.go:89] found id: "d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:26.021002  384965 cri.go:89] found id: ""
	I1002 11:59:26.021013  384965 logs.go:284] 1 containers: [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6]
	I1002 11:59:26.021076  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.025582  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 11:59:26.025657  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 11:59:26.077339  384965 cri.go:89] found id: "beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:26.077371  384965 cri.go:89] found id: ""
	I1002 11:59:26.077383  384965 logs.go:284] 1 containers: [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f]
	I1002 11:59:26.077448  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.082311  384965 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 11:59:26.082396  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 11:59:26.126803  384965 cri.go:89] found id: ""
	I1002 11:59:26.126833  384965 logs.go:284] 0 containers: []
	W1002 11:59:26.126843  384965 logs.go:286] No container was found matching "kindnet"
	I1002 11:59:26.126851  384965 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 11:59:26.126992  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 11:59:26.176829  384965 cri.go:89] found id: "2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:26.176858  384965 cri.go:89] found id: "b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:26.176866  384965 cri.go:89] found id: ""
	I1002 11:59:26.176876  384965 logs.go:284] 2 containers: [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358]
	I1002 11:59:26.176945  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.182892  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.189288  384965 logs.go:123] Gathering logs for kube-controller-manager [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f] ...
	I1002 11:59:26.189316  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:26.257856  384965 logs.go:123] Gathering logs for storage-provisioner [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175] ...
	I1002 11:59:26.257910  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:26.297691  384965 logs.go:123] Gathering logs for container status ...
	I1002 11:59:26.297747  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 11:59:26.351211  384965 logs.go:123] Gathering logs for kubelet ...
	I1002 11:59:26.351254  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 11:59:26.425373  384965 logs.go:123] Gathering logs for describe nodes ...
	I1002 11:59:26.425416  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 11:59:26.568944  384965 logs.go:123] Gathering logs for kube-proxy [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6] ...
	I1002 11:59:26.568985  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:26.627406  384965 logs.go:123] Gathering logs for dmesg ...
	I1002 11:59:26.627449  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 11:59:26.641249  384965 logs.go:123] Gathering logs for coredns [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d] ...
	I1002 11:59:26.641281  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:26.696939  384965 logs.go:123] Gathering logs for storage-provisioner [b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358] ...
	I1002 11:59:26.696974  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:26.744365  384965 logs.go:123] Gathering logs for CRI-O ...
	I1002 11:59:26.744406  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 11:59:27.279579  384965 logs.go:123] Gathering logs for kube-apiserver [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735] ...
	I1002 11:59:27.279639  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:27.366447  384965 logs.go:123] Gathering logs for etcd [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d] ...
	I1002 11:59:27.366508  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:27.436429  384965 logs.go:123] Gathering logs for kube-scheduler [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e] ...
	I1002 11:59:27.436476  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:26.679464  384505 out.go:204]   - Booting up control plane ...
	I1002 11:59:26.679594  384505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 11:59:26.688060  384505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 11:59:26.700892  384505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 11:59:26.702245  384505 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 11:59:26.706277  384505 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 11:59:28.222692  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:30.223561  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:27.973079  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:28.472938  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:28.973900  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:29.473650  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:29.972984  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:30.473216  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:30.973931  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:31.474026  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:31.973024  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:32.473723  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:29.989828  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:59:29.995664  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 200:
	ok
	I1002 11:59:29.998819  384965 api_server.go:141] control plane version: v1.28.2
	I1002 11:59:29.998846  384965 api_server.go:131] duration metric: took 4.25905343s to wait for apiserver health ...
	I1002 11:59:29.998855  384965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:59:29.998882  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 11:59:29.998944  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 11:59:30.037898  384965 cri.go:89] found id: "3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:30.037925  384965 cri.go:89] found id: ""
	I1002 11:59:30.037935  384965 logs.go:284] 1 containers: [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735]
	I1002 11:59:30.038014  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.042751  384965 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 11:59:30.042835  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 11:59:30.085339  384965 cri.go:89] found id: "8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:30.085378  384965 cri.go:89] found id: ""
	I1002 11:59:30.085390  384965 logs.go:284] 1 containers: [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d]
	I1002 11:59:30.085463  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.090184  384965 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 11:59:30.090265  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 11:59:30.130574  384965 cri.go:89] found id: "f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:30.130602  384965 cri.go:89] found id: ""
	I1002 11:59:30.130611  384965 logs.go:284] 1 containers: [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d]
	I1002 11:59:30.130665  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.135040  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 11:59:30.135125  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 11:59:30.178044  384965 cri.go:89] found id: "7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:30.178067  384965 cri.go:89] found id: ""
	I1002 11:59:30.178078  384965 logs.go:284] 1 containers: [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e]
	I1002 11:59:30.178144  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.182586  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 11:59:30.182662  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 11:59:30.226121  384965 cri.go:89] found id: "d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:30.226142  384965 cri.go:89] found id: ""
	I1002 11:59:30.226152  384965 logs.go:284] 1 containers: [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6]
	I1002 11:59:30.226209  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.231080  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 11:59:30.231156  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 11:59:30.275499  384965 cri.go:89] found id: "beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:30.275533  384965 cri.go:89] found id: ""
	I1002 11:59:30.275545  384965 logs.go:284] 1 containers: [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f]
	I1002 11:59:30.275611  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.281023  384965 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 11:59:30.281089  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 11:59:30.325580  384965 cri.go:89] found id: ""
	I1002 11:59:30.325610  384965 logs.go:284] 0 containers: []
	W1002 11:59:30.325622  384965 logs.go:286] No container was found matching "kindnet"
	I1002 11:59:30.325630  384965 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 11:59:30.325691  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 11:59:30.372727  384965 cri.go:89] found id: "2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:30.372760  384965 cri.go:89] found id: "b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:30.372766  384965 cri.go:89] found id: ""
	I1002 11:59:30.372776  384965 logs.go:284] 2 containers: [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358]
	I1002 11:59:30.372838  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.377541  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.382371  384965 logs.go:123] Gathering logs for kubelet ...
	I1002 11:59:30.382403  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 11:59:30.449081  384965 logs.go:123] Gathering logs for etcd [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d] ...
	I1002 11:59:30.449132  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:30.519339  384965 logs.go:123] Gathering logs for coredns [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d] ...
	I1002 11:59:30.519392  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:30.566205  384965 logs.go:123] Gathering logs for storage-provisioner [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175] ...
	I1002 11:59:30.566250  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:30.607933  384965 logs.go:123] Gathering logs for container status ...
	I1002 11:59:30.607973  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 11:59:30.655904  384965 logs.go:123] Gathering logs for kube-apiserver [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735] ...
	I1002 11:59:30.655946  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:30.717563  384965 logs.go:123] Gathering logs for kube-controller-manager [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f] ...
	I1002 11:59:30.717619  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:30.779216  384965 logs.go:123] Gathering logs for storage-provisioner [b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358] ...
	I1002 11:59:30.779268  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:30.822075  384965 logs.go:123] Gathering logs for CRI-O ...
	I1002 11:59:30.822114  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 11:59:31.180609  384965 logs.go:123] Gathering logs for dmesg ...
	I1002 11:59:31.180664  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 11:59:31.196239  384965 logs.go:123] Gathering logs for describe nodes ...
	I1002 11:59:31.196274  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 11:59:31.345274  384965 logs.go:123] Gathering logs for kube-scheduler [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e] ...
	I1002 11:59:31.345318  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:31.392175  384965 logs.go:123] Gathering logs for kube-proxy [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6] ...
	I1002 11:59:31.392212  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:33.946599  384965 system_pods.go:59] 8 kube-system pods found
	I1002 11:59:33.946635  384965 system_pods.go:61] "coredns-5dd5756b68-9wv56" [f04d6125-ea28-41cc-9251-7ccee27162bc] Running
	I1002 11:59:33.946643  384965 system_pods.go:61] "etcd-default-k8s-diff-port-777999" [5bc34f24-2922-4ce8-b11d-935f9b3c8b4c] Running
	I1002 11:59:33.946650  384965 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-777999" [b44f8ca6-f43e-4c99-af8d-23255f94257c] Running
	I1002 11:59:33.946656  384965 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-777999" [5e20830f-d51b-4ca7-a7b6-e0f24f4a50e1] Running
	I1002 11:59:33.946659  384965 system_pods.go:61] "kube-proxy-gchnc" [061811c7-2ac8-448a-b441-838f9aaf9145] Running
	I1002 11:59:33.946664  384965 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-777999" [9d874541-7d2e-4c9b-8935-2bf7386e6a07] Running
	I1002 11:59:33.946677  384965 system_pods.go:61] "metrics-server-57f55c9bc5-wk2c7" [f28e9db7-2182-40d8-85a7-fa40c2ff8077] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:33.946687  384965 system_pods.go:61] "storage-provisioner" [aff1275b-909d-4c70-9fb5-cb36170c591e] Running
	I1002 11:59:33.946704  384965 system_pods.go:74] duration metric: took 3.947840874s to wait for pod list to return data ...
	I1002 11:59:33.946715  384965 default_sa.go:34] waiting for default service account to be created ...
	I1002 11:59:33.950028  384965 default_sa.go:45] found service account: "default"
	I1002 11:59:33.950059  384965 default_sa.go:55] duration metric: took 3.333093ms for default service account to be created ...
	I1002 11:59:33.950069  384965 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 11:59:33.956623  384965 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:33.956651  384965 system_pods.go:89] "coredns-5dd5756b68-9wv56" [f04d6125-ea28-41cc-9251-7ccee27162bc] Running
	I1002 11:59:33.956657  384965 system_pods.go:89] "etcd-default-k8s-diff-port-777999" [5bc34f24-2922-4ce8-b11d-935f9b3c8b4c] Running
	I1002 11:59:33.956662  384965 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-777999" [b44f8ca6-f43e-4c99-af8d-23255f94257c] Running
	I1002 11:59:33.956666  384965 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-777999" [5e20830f-d51b-4ca7-a7b6-e0f24f4a50e1] Running
	I1002 11:59:33.956670  384965 system_pods.go:89] "kube-proxy-gchnc" [061811c7-2ac8-448a-b441-838f9aaf9145] Running
	I1002 11:59:33.956674  384965 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-777999" [9d874541-7d2e-4c9b-8935-2bf7386e6a07] Running
	I1002 11:59:33.956681  384965 system_pods.go:89] "metrics-server-57f55c9bc5-wk2c7" [f28e9db7-2182-40d8-85a7-fa40c2ff8077] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:33.956686  384965 system_pods.go:89] "storage-provisioner" [aff1275b-909d-4c70-9fb5-cb36170c591e] Running
	I1002 11:59:33.956694  384965 system_pods.go:126] duration metric: took 6.618721ms to wait for k8s-apps to be running ...
	I1002 11:59:33.956704  384965 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:59:33.956749  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:33.976674  384965 system_svc.go:56] duration metric: took 19.952308ms WaitForService to wait for kubelet.
	I1002 11:59:33.976710  384965 kubeadm.go:581] duration metric: took 4m24.137472355s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:59:33.976750  384965 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:59:33.982173  384965 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:59:33.982211  384965 node_conditions.go:123] node cpu capacity is 2
	I1002 11:59:33.982227  384965 node_conditions.go:105] duration metric: took 5.470843ms to run NodePressure ...
	I1002 11:59:33.982242  384965 start.go:228] waiting for startup goroutines ...
	I1002 11:59:33.982251  384965 start.go:233] waiting for cluster config update ...
	I1002 11:59:33.982303  384965 start.go:242] writing updated cluster config ...
	I1002 11:59:33.982687  384965 ssh_runner.go:195] Run: rm -f paused
	I1002 11:59:34.039684  384965 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 11:59:34.041739  384965 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-777999" cluster and "default" namespace by default
	I1002 11:59:32.723475  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:35.221523  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:32.973400  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:33.473644  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:33.973820  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:34.473607  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:34.973848  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:35.473328  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:35.973485  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:36.473888  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:36.973837  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:37.473514  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:37.973633  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.094807  384787 kubeadm.go:1081] duration metric: took 11.38520709s to wait for elevateKubeSystemPrivileges.
	I1002 11:59:38.094846  384787 kubeadm.go:406] StartCluster complete in 5m11.722637512s
	I1002 11:59:38.094872  384787 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:38.094972  384787 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:59:38.097201  384787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:38.097495  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:59:38.097829  384787 config.go:182] Loaded profile config "embed-certs-487027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:59:38.097966  384787 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:59:38.098056  384787 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-487027"
	I1002 11:59:38.098079  384787 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-487027"
	I1002 11:59:38.098083  384787 addons.go:69] Setting default-storageclass=true in profile "embed-certs-487027"
	I1002 11:59:38.098098  384787 addons.go:69] Setting metrics-server=true in profile "embed-certs-487027"
	I1002 11:59:38.098110  384787 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-487027"
	I1002 11:59:38.098113  384787 addons.go:231] Setting addon metrics-server=true in "embed-certs-487027"
	W1002 11:59:38.098125  384787 addons.go:240] addon metrics-server should already be in state true
	I1002 11:59:38.098177  384787 host.go:66] Checking if "embed-certs-487027" exists ...
	I1002 11:59:38.098608  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.098643  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.098647  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	W1002 11:59:38.098092  384787 addons.go:240] addon storage-provisioner should already be in state true
	I1002 11:59:38.098827  384787 host.go:66] Checking if "embed-certs-487027" exists ...
	I1002 11:59:38.098670  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.099207  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.099235  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.118215  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40451
	I1002 11:59:38.118691  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.119232  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.119260  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.119649  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.120147  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.120182  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.129398  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41027
	I1002 11:59:38.129652  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35275
	I1002 11:59:38.130092  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.130723  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.130746  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.131301  384787 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-487027" context rescaled to 1 replicas
	I1002 11:59:38.131342  384787 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.147 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:59:38.133196  384787 out.go:177] * Verifying Kubernetes components...
	I1002 11:59:38.134675  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:38.132825  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.134964  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.135242  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.135408  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.135434  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.135834  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.136413  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.136455  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.138974  384787 addons.go:231] Setting addon default-storageclass=true in "embed-certs-487027"
	W1002 11:59:38.138995  384787 addons.go:240] addon default-storageclass should already be in state true
	I1002 11:59:38.139025  384787 host.go:66] Checking if "embed-certs-487027" exists ...
	I1002 11:59:38.139434  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.139469  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.141226  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40053
	I1002 11:59:38.141643  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.142086  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.142104  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.142433  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.142609  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.144425  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:59:38.146525  384787 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 11:59:38.148187  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 11:59:38.148204  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 11:59:38.148227  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:59:38.152187  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.152549  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:59:38.152575  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.152783  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:59:38.152988  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:59:38.153139  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:59:38.153280  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:59:38.157114  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33487
	I1002 11:59:38.157655  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.158192  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.158211  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.158619  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.159253  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.159290  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.159506  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34867
	I1002 11:59:38.159895  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.160383  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.160395  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.160727  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.160902  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.162835  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:59:38.164490  384787 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:59:37.211498  384505 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.504818 seconds
	I1002 11:59:37.211660  384505 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 11:59:37.229976  384505 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 11:59:37.759297  384505 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 11:59:37.759467  384505 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-749860 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1002 11:59:38.284135  384505 kubeadm.go:322] [bootstrap-token] Using token: rt49x4.7033jvaiaszsonci
	I1002 11:59:38.285950  384505 out.go:204]   - Configuring RBAC rules ...
	I1002 11:59:38.286108  384505 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 11:59:38.299290  384505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 11:59:38.306326  384505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 11:59:38.312137  384505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 11:59:38.320028  384505 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 11:59:38.439411  384505 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 11:59:38.704007  384505 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 11:59:38.705937  384505 kubeadm.go:322] 
	I1002 11:59:38.706075  384505 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 11:59:38.706096  384505 kubeadm.go:322] 
	I1002 11:59:38.706210  384505 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 11:59:38.706221  384505 kubeadm.go:322] 
	I1002 11:59:38.706256  384505 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 11:59:38.706341  384505 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 11:59:38.706433  384505 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 11:59:38.706448  384505 kubeadm.go:322] 
	I1002 11:59:38.706527  384505 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 11:59:38.706614  384505 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 11:59:38.706701  384505 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 11:59:38.706712  384505 kubeadm.go:322] 
	I1002 11:59:38.706805  384505 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1002 11:59:38.706898  384505 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 11:59:38.706910  384505 kubeadm.go:322] 
	I1002 11:59:38.707003  384505 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rt49x4.7033jvaiaszsonci \
	I1002 11:59:38.707134  384505 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 11:59:38.707169  384505 kubeadm.go:322]     --control-plane 	  
	I1002 11:59:38.707179  384505 kubeadm.go:322] 
	I1002 11:59:38.707272  384505 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 11:59:38.707283  384505 kubeadm.go:322] 
	I1002 11:59:38.707373  384505 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rt49x4.7033jvaiaszsonci \
	I1002 11:59:38.707500  384505 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 11:59:38.708451  384505 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 11:59:38.708478  384505 cni.go:84] Creating CNI manager for ""
	I1002 11:59:38.708501  384505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:59:38.710166  384505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:59:38.711596  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:59:38.725385  384505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:59:38.748155  384505 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:59:38.748294  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.748295  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=old-k8s-version-749860 minikube.k8s.io/updated_at=2023_10_02T11_59_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.795585  384505 ops.go:34] apiserver oom_adj: -16
	I1002 11:59:39.068200  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.166036  384787 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:38.166047  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:59:38.166063  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:59:38.169435  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.169903  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:59:38.169929  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.170098  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:59:38.170273  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:59:38.170517  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:59:38.170711  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:59:38.177450  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35439
	I1002 11:59:38.178044  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.178596  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.178616  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.179009  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.179244  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.181209  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:59:38.181596  384787 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:38.181613  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:59:38.181641  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:59:38.185272  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.185785  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:59:38.185813  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.186245  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:59:38.186539  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:59:38.186748  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:59:38.186938  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:59:38.337092  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 11:59:38.337129  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 11:59:38.379388  384787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:38.389992  384787 node_ready.go:35] waiting up to 6m0s for node "embed-certs-487027" to be "Ready" ...
	I1002 11:59:38.390060  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 11:59:38.399264  384787 node_ready.go:49] node "embed-certs-487027" has status "Ready":"True"
	I1002 11:59:38.399295  384787 node_ready.go:38] duration metric: took 9.264648ms waiting for node "embed-certs-487027" to be "Ready" ...
	I1002 11:59:38.399308  384787 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:38.401885  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 11:59:38.401909  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 11:59:38.406757  384787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:38.438158  384787 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.458749  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:38.458784  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 11:59:38.517143  384787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:38.547128  384787 pod_ready.go:92] pod "etcd-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:38.547161  384787 pod_ready.go:81] duration metric: took 108.899374ms waiting for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.547176  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.744560  384787 pod_ready.go:92] pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:38.744587  384787 pod_ready.go:81] duration metric: took 197.40322ms waiting for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.744598  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.852242  384787 pod_ready.go:92] pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:38.852277  384787 pod_ready.go:81] duration metric: took 107.671499ms waiting for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.852294  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6g7f7" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.017545  384787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.638113738s)
	I1002 11:59:41.017602  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.017613  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.017597  384787 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.627499125s)
	I1002 11:59:41.017658  384787 start.go:923] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1002 11:59:41.017718  384787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.610925223s)
	I1002 11:59:41.017747  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.017759  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.017907  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.017960  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.017977  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.017994  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.018535  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.018549  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.018559  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.018568  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.018636  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.018645  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.018679  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Closing plugin on server side
	I1002 11:59:41.019046  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Closing plugin on server side
	I1002 11:59:41.019049  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.019064  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.027153  384787 pod_ready.go:102] pod "kube-proxy-6g7f7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:41.049978  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.050007  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.050369  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.050391  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.100800  384787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.583606696s)
	I1002 11:59:41.100870  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.100900  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.101237  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.101258  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.101268  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.101278  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.101576  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Closing plugin on server side
	I1002 11:59:41.101621  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.101634  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.101647  384787 addons.go:467] Verifying addon metrics-server=true in "embed-certs-487027"
	I1002 11:59:41.103637  384787 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1002 11:59:37.222165  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:39.223800  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:41.105142  384787 addons.go:502] enable addons completed in 3.007188775s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1002 11:59:41.492039  384787 pod_ready.go:92] pod "kube-proxy-6g7f7" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:41.492067  384787 pod_ready.go:81] duration metric: took 2.639765498s waiting for pod "kube-proxy-6g7f7" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.492081  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.500950  384787 pod_ready.go:92] pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:41.500979  384787 pod_ready.go:81] duration metric: took 8.889098ms waiting for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.500990  384787 pod_ready.go:38] duration metric: took 3.101668727s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:41.501012  384787 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:59:41.501079  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:59:41.533141  384787 api_server.go:72] duration metric: took 3.401757173s to wait for apiserver process to appear ...
	I1002 11:59:41.533167  384787 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:59:41.533183  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:59:41.543027  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 200:
	ok
	I1002 11:59:41.545456  384787 api_server.go:141] control plane version: v1.28.2
	I1002 11:59:41.545483  384787 api_server.go:131] duration metric: took 12.308941ms to wait for apiserver health ...
	I1002 11:59:41.545494  384787 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:59:41.556090  384787 system_pods.go:59] 8 kube-system pods found
	I1002 11:59:41.556183  384787 system_pods.go:61] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:41.556209  384787 system_pods.go:61] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:41.556247  384787 system_pods.go:61] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:41.556272  384787 system_pods.go:61] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:41.556290  384787 system_pods.go:61] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:41.556306  384787 system_pods.go:61] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:41.556329  384787 system_pods.go:61] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:41.556366  384787 system_pods.go:61] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:41.556392  384787 system_pods.go:74] duration metric: took 10.889958ms to wait for pod list to return data ...
	I1002 11:59:41.556412  384787 default_sa.go:34] waiting for default service account to be created ...
	I1002 11:59:41.594659  384787 default_sa.go:45] found service account: "default"
	I1002 11:59:41.594690  384787 default_sa.go:55] duration metric: took 38.261546ms for default service account to be created ...
	I1002 11:59:41.594701  384787 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 11:59:41.800342  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:41.800375  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:41.800382  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:41.800388  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:41.800393  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:41.800397  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:41.800401  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:41.800407  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:41.800412  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:41.800431  384787 retry.go:31] will retry after 300.830497ms: missing components: kube-dns
	I1002 11:59:42.116978  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:42.117028  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:42.117039  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:42.117048  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:42.117058  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:42.117064  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:42.117071  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:42.117080  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:42.117089  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:42.117109  384787 retry.go:31] will retry after 380.49084ms: missing components: kube-dns
	I1002 11:59:42.506867  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:42.506901  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:42.506908  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:42.506914  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:42.506919  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:42.506923  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:42.506927  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:42.506933  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:42.506939  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:42.506954  384787 retry.go:31] will retry after 409.062449ms: missing components: kube-dns
	I1002 11:59:42.924401  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:42.924443  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:42.924456  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:42.924464  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:42.924471  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:42.924477  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:42.924484  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:42.924493  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:42.924503  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:42.924524  384787 retry.go:31] will retry after 544.758887ms: missing components: kube-dns
	I1002 11:59:43.477592  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:43.477622  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Running
	I1002 11:59:43.477628  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:43.477632  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:43.477637  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:43.477640  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:43.477645  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:43.477651  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:43.477657  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Running
	I1002 11:59:43.477665  384787 system_pods.go:126] duration metric: took 1.882959518s to wait for k8s-apps to be running ...
	I1002 11:59:43.477672  384787 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:59:43.477714  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:43.492105  384787 system_svc.go:56] duration metric: took 14.416995ms WaitForService to wait for kubelet.
	I1002 11:59:43.492138  384787 kubeadm.go:581] duration metric: took 5.360761991s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:59:43.492161  384787 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:59:43.496739  384787 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:59:43.496769  384787 node_conditions.go:123] node cpu capacity is 2
	I1002 11:59:43.496785  384787 node_conditions.go:105] duration metric: took 4.61842ms to run NodePressure ...
	I1002 11:59:43.496801  384787 start.go:228] waiting for startup goroutines ...
	I1002 11:59:43.496810  384787 start.go:233] waiting for cluster config update ...
	I1002 11:59:43.496823  384787 start.go:242] writing updated cluster config ...
	I1002 11:59:43.497156  384787 ssh_runner.go:195] Run: rm -f paused
	I1002 11:59:43.568627  384787 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 11:59:43.570324  384787 out.go:177] * Done! kubectl is now configured to use "embed-certs-487027" cluster and "default" namespace by default
	I1002 11:59:39.194035  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:39.810338  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:40.310222  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:40.809912  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:41.310004  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:41.810506  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:42.309581  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:42.810312  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:43.310294  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:43.809602  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:41.722699  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:44.221300  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:44.309927  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:44.810169  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:45.310095  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:45.809546  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:46.310144  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:46.809605  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:47.310487  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:47.809697  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:48.309464  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:48.809680  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:46.723036  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:49.220863  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:51.221417  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:49.310000  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:49.809922  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:50.310214  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:50.809728  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:51.309659  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:51.809723  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:52.309837  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:52.809788  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:53.309655  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:53.809468  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:54.310103  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:54.810421  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:54.968150  384505 kubeadm.go:1081] duration metric: took 16.219921091s to wait for elevateKubeSystemPrivileges.
	I1002 11:59:54.968184  384505 kubeadm.go:406] StartCluster complete in 5m46.426951815s
	I1002 11:59:54.968203  384505 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:54.968302  384505 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:59:54.970101  384505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:54.970429  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:59:54.970599  384505 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:59:54.970672  384505 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-749860"
	I1002 11:59:54.970692  384505 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-749860"
	W1002 11:59:54.970703  384505 addons.go:240] addon storage-provisioner should already be in state true
	I1002 11:59:54.970723  384505 config.go:182] Loaded profile config "old-k8s-version-749860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1002 11:59:54.970753  384505 host.go:66] Checking if "old-k8s-version-749860" exists ...
	I1002 11:59:54.970775  384505 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-749860"
	I1002 11:59:54.970792  384505 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-749860"
	I1002 11:59:54.971196  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.971204  384505 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-749860"
	I1002 11:59:54.971226  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.971199  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.971240  384505 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-749860"
	W1002 11:59:54.971251  384505 addons.go:240] addon metrics-server should already be in state true
	I1002 11:59:54.971281  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.971297  384505 host.go:66] Checking if "old-k8s-version-749860" exists ...
	I1002 11:59:54.971669  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.971707  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.989112  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40477
	I1002 11:59:54.989701  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:54.989819  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38163
	I1002 11:59:54.989971  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41679
	I1002 11:59:54.990503  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:54.990552  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:54.990574  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:54.990592  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:54.990975  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:54.991042  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:54.991062  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:54.991094  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:54.991110  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:54.991327  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:54.991555  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:54.991596  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:54.992169  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.992183  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.992197  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.992206  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.998018  384505 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-749860"
	W1002 11:59:54.998043  384505 addons.go:240] addon default-storageclass should already be in state true
	I1002 11:59:54.998067  384505 host.go:66] Checking if "old-k8s-version-749860" exists ...
	I1002 11:59:54.998716  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:55.003322  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:55.020037  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
	I1002 11:59:55.020659  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.021292  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.021313  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.021707  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.021896  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:55.022155  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34603
	I1002 11:59:55.022286  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35351
	I1002 11:59:55.022697  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.024740  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:59:55.024793  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.024824  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.024839  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.027065  384505 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 11:59:55.025237  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.025561  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.028415  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.028568  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 11:59:55.028579  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 11:59:55.028596  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:59:55.028867  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.029051  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:55.030397  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:55.030424  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:55.031461  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:59:55.033181  384505 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:59:55.032032  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.032651  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:59:55.034670  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:59:55.034698  384505 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:55.034703  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.034711  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:59:55.034727  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:59:55.034894  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:59:55.035089  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:59:55.035269  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:59:55.046534  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.046573  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:59:55.046599  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.046629  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:59:55.046888  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:59:55.047102  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:59:55.047276  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:59:55.051887  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45615
	I1002 11:59:55.052372  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.052940  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.052970  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.053349  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.053558  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:55.055503  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:59:55.055762  384505 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:55.055780  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:59:55.055805  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:59:55.062494  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.062526  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:59:55.062542  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.062550  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:59:55.062752  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:59:55.062922  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:59:55.063162  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:59:55.103907  384505 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-749860" context rescaled to 1 replicas
	I1002 11:59:55.103958  384505 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.82 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:59:55.105626  384505 out.go:177] * Verifying Kubernetes components...
	I1002 11:59:53.722331  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:54.914848  384344 pod_ready.go:81] duration metric: took 4m0.000973055s waiting for pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:54.914899  384344 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:54.914926  384344 pod_ready.go:38] duration metric: took 4m12.745047876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:54.914963  384344 kubeadm.go:640] restartCluster took 4m32.83554771s
	W1002 11:59:54.915062  384344 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1002 11:59:54.915098  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 11:59:55.106948  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:55.283274  384505 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-749860" to be "Ready" ...
	I1002 11:59:55.283336  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 11:59:55.291603  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 11:59:55.291629  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 11:59:55.297775  384505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:55.321901  384505 node_ready.go:49] node "old-k8s-version-749860" has status "Ready":"True"
	I1002 11:59:55.321927  384505 node_ready.go:38] duration metric: took 38.615436ms waiting for node "old-k8s-version-749860" to be "Ready" ...
	I1002 11:59:55.321939  384505 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:55.327570  384505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:55.355612  384505 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:55.357164  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 11:59:55.357187  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 11:59:55.423852  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:55.423883  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 11:59:55.477683  384505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:56.041846  384505 start.go:923] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I1002 11:59:56.230394  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230432  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.230466  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230488  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.230810  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.230869  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.230888  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.230913  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.230936  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230890  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230969  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.230990  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.231024  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.231326  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.231341  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.231652  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.231667  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.231740  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.327260  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.327289  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.327633  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.327654  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.547462  384505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.069727635s)
	I1002 11:59:56.547536  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.547549  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.547901  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.547948  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.547974  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.547993  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.548010  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.548288  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.548321  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.548322  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.548333  384505 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-749860"
	I1002 11:59:56.550084  384505 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1002 11:59:56.551798  384505 addons.go:502] enable addons completed in 1.581195105s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1002 11:59:57.554993  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:59.933613  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:01.937565  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:04.431925  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:05.433988  384505 pod_ready.go:92] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:05.434013  384505 pod_ready.go:81] duration metric: took 10.078369703s waiting for pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:05.434029  384505 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mdtp5" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:05.441501  384505 pod_ready.go:92] pod "kube-proxy-mdtp5" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:05.441534  384505 pod_ready.go:81] duration metric: took 7.496823ms waiting for pod "kube-proxy-mdtp5" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:05.441543  384505 pod_ready.go:38] duration metric: took 10.1195912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:00:05.441592  384505 api_server.go:52] waiting for apiserver process to appear ...
	I1002 12:00:05.441680  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 12:00:05.460054  384505 api_server.go:72] duration metric: took 10.356049869s to wait for apiserver process to appear ...
	I1002 12:00:05.460080  384505 api_server.go:88] waiting for apiserver healthz status ...
	I1002 12:00:05.460100  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 12:00:05.466796  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 200:
	ok
	I1002 12:00:05.467813  384505 api_server.go:141] control plane version: v1.16.0
	I1002 12:00:05.467845  384505 api_server.go:131] duration metric: took 7.75678ms to wait for apiserver health ...
	I1002 12:00:05.467855  384505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 12:00:05.472349  384505 system_pods.go:59] 4 kube-system pods found
	I1002 12:00:05.472384  384505 system_pods.go:61] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:05.472391  384505 system_pods.go:61] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:05.472401  384505 system_pods.go:61] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:05.472410  384505 system_pods.go:61] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:05.472433  384505 system_pods.go:74] duration metric: took 4.569442ms to wait for pod list to return data ...
	I1002 12:00:05.472446  384505 default_sa.go:34] waiting for default service account to be created ...
	I1002 12:00:05.476327  384505 default_sa.go:45] found service account: "default"
	I1002 12:00:05.476349  384505 default_sa.go:55] duration metric: took 3.895344ms for default service account to be created ...
	I1002 12:00:05.476357  384505 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 12:00:05.480522  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:05.480545  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:05.480550  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:05.480557  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:05.480563  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:05.480579  384505 retry.go:31] will retry after 270.891275ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:05.757515  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:05.757555  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:05.757563  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:05.757574  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:05.757585  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:05.757603  384505 retry.go:31] will retry after 336.725562ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:06.099945  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:06.099978  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:06.099985  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:06.099995  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:06.100002  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:06.100024  384505 retry.go:31] will retry after 389.53153ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:06.504317  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:06.504354  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:06.504362  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:06.504375  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:06.504385  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:06.504407  384505 retry.go:31] will retry after 453.465732ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:06.962509  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:06.962534  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:06.962539  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:06.962546  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:06.962552  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:06.962568  384505 retry.go:31] will retry after 489.820063ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:07.457422  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:07.457451  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:07.457456  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:07.457465  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:07.457472  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:07.457490  384505 retry.go:31] will retry after 931.079053ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:08.394500  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:08.394527  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:08.394532  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:08.394538  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:08.394546  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:08.394562  384505 retry.go:31] will retry after 929.512162ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:09.216426  384344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.301296702s)
	I1002 12:00:09.216493  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:00:09.230712  384344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 12:00:09.239588  384344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 12:00:09.248624  384344 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 12:00:09.248677  384344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 12:00:09.466935  384344 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 12:00:09.329677  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:09.329709  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:09.329714  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:09.329722  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:09.329728  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:09.329746  384505 retry.go:31] will retry after 898.08397ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:10.232119  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:10.232155  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:10.232163  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:10.232176  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:10.232185  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:10.232212  384505 retry.go:31] will retry after 1.809149678s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:12.047424  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:12.047452  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:12.047458  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:12.047465  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:12.047471  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:12.047487  384505 retry.go:31] will retry after 2.054960799s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:14.109048  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:14.109080  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:14.109088  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:14.109098  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:14.109108  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:14.109128  384505 retry.go:31] will retry after 2.523219254s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:16.640373  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:16.640399  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:16.640405  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:16.640412  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:16.640419  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:16.640436  384505 retry.go:31] will retry after 2.61022195s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:19.606412  384344 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 12:00:19.606505  384344 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 12:00:19.606620  384344 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 12:00:19.606760  384344 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 12:00:19.606856  384344 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 12:00:19.606912  384344 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 12:00:19.608541  384344 out.go:204]   - Generating certificates and keys ...
	I1002 12:00:19.608638  384344 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 12:00:19.608743  384344 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 12:00:19.608891  384344 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 12:00:19.608999  384344 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1002 12:00:19.609113  384344 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 12:00:19.609193  384344 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1002 12:00:19.609276  384344 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1002 12:00:19.609360  384344 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1002 12:00:19.609453  384344 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 12:00:19.609548  384344 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 12:00:19.609624  384344 kubeadm.go:322] [certs] Using the existing "sa" key
	I1002 12:00:19.609694  384344 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 12:00:19.609761  384344 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 12:00:19.609833  384344 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 12:00:19.609916  384344 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 12:00:19.609991  384344 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 12:00:19.610100  384344 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 12:00:19.610182  384344 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 12:00:19.611696  384344 out.go:204]   - Booting up control plane ...
	I1002 12:00:19.611810  384344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 12:00:19.611916  384344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 12:00:19.612021  384344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 12:00:19.612173  384344 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 12:00:19.612294  384344 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 12:00:19.612346  384344 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 12:00:19.612576  384344 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 12:00:19.612683  384344 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502476 seconds
	I1002 12:00:19.612825  384344 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 12:00:19.612943  384344 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 12:00:19.613026  384344 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 12:00:19.613215  384344 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-304121 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 12:00:19.613266  384344 kubeadm.go:322] [bootstrap-token] Using token: pd40pp.2tkeaw4x1d1qfkq9
	I1002 12:00:19.614472  384344 out.go:204]   - Configuring RBAC rules ...
	I1002 12:00:19.614593  384344 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 12:00:19.614706  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 12:00:19.614912  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 12:00:19.615054  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 12:00:19.615220  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 12:00:19.615315  384344 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 12:00:19.615474  384344 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 12:00:19.615540  384344 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 12:00:19.615622  384344 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 12:00:19.615633  384344 kubeadm.go:322] 
	I1002 12:00:19.615725  384344 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 12:00:19.615747  384344 kubeadm.go:322] 
	I1002 12:00:19.615851  384344 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 12:00:19.615864  384344 kubeadm.go:322] 
	I1002 12:00:19.615894  384344 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 12:00:19.615997  384344 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 12:00:19.616084  384344 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 12:00:19.616094  384344 kubeadm.go:322] 
	I1002 12:00:19.616143  384344 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 12:00:19.616152  384344 kubeadm.go:322] 
	I1002 12:00:19.616222  384344 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 12:00:19.616240  384344 kubeadm.go:322] 
	I1002 12:00:19.616321  384344 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 12:00:19.616420  384344 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 12:00:19.616532  384344 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 12:00:19.616548  384344 kubeadm.go:322] 
	I1002 12:00:19.616640  384344 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 12:00:19.616734  384344 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 12:00:19.616743  384344 kubeadm.go:322] 
	I1002 12:00:19.616857  384344 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token pd40pp.2tkeaw4x1d1qfkq9 \
	I1002 12:00:19.617005  384344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 12:00:19.617049  384344 kubeadm.go:322] 	--control-plane 
	I1002 12:00:19.617059  384344 kubeadm.go:322] 
	I1002 12:00:19.617136  384344 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 12:00:19.617142  384344 kubeadm.go:322] 
	I1002 12:00:19.617238  384344 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token pd40pp.2tkeaw4x1d1qfkq9 \
	I1002 12:00:19.617333  384344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 12:00:19.617371  384344 cni.go:84] Creating CNI manager for ""
	I1002 12:00:19.617384  384344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 12:00:19.618962  384344 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 12:00:19.620215  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 12:00:19.650698  384344 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 12:00:19.699458  384344 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 12:00:19.699594  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=no-preload-304121 minikube.k8s.io/updated_at=2023_10_02T12_00_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:19.699598  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:19.810984  384344 ops.go:34] apiserver oom_adj: -16
	I1002 12:00:20.114460  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:20.245669  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:20.876563  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:19.256294  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:19.256319  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:19.256325  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:19.256332  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:19.256338  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:19.256355  384505 retry.go:31] will retry after 3.270215577s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:22.532684  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:22.532714  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:22.532723  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:22.532730  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:22.532737  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:22.532754  384505 retry.go:31] will retry after 5.273561216s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:21.376620  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:21.876453  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:22.376537  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:22.876967  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:23.377242  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:23.876469  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:24.376391  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:24.877422  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:25.376422  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:25.877251  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:27.810777  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:27.810810  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:27.810816  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:27.810822  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:27.810828  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:27.810845  384505 retry.go:31] will retry after 6.34425242s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:26.376388  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:26.877267  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:27.376480  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:27.877214  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:28.376560  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:28.876964  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:29.377314  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:29.877135  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:30.377301  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:30.876525  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:31.376660  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:31.876991  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:32.376934  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:32.584774  384344 kubeadm.go:1081] duration metric: took 12.88524826s to wait for elevateKubeSystemPrivileges.
	I1002 12:00:32.584821  384344 kubeadm.go:406] StartCluster complete in 5m10.55691254s
	I1002 12:00:32.584849  384344 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:00:32.584955  384344 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 12:00:32.587722  384344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:00:32.588018  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 12:00:32.588146  384344 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 12:00:32.588230  384344 addons.go:69] Setting default-storageclass=true in profile "no-preload-304121"
	I1002 12:00:32.588251  384344 addons.go:69] Setting metrics-server=true in profile "no-preload-304121"
	I1002 12:00:32.588265  384344 addons.go:231] Setting addon metrics-server=true in "no-preload-304121"
	W1002 12:00:32.588273  384344 addons.go:240] addon metrics-server should already be in state true
	I1002 12:00:32.588252  384344 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-304121"
	I1002 12:00:32.588323  384344 config.go:182] Loaded profile config "no-preload-304121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:00:32.588333  384344 host.go:66] Checking if "no-preload-304121" exists ...
	I1002 12:00:32.588229  384344 addons.go:69] Setting storage-provisioner=true in profile "no-preload-304121"
	I1002 12:00:32.588387  384344 addons.go:231] Setting addon storage-provisioner=true in "no-preload-304121"
	W1002 12:00:32.588397  384344 addons.go:240] addon storage-provisioner should already be in state true
	I1002 12:00:32.588433  384344 host.go:66] Checking if "no-preload-304121" exists ...
	I1002 12:00:32.588695  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.588731  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.588737  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.588777  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.588867  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.588891  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.612093  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40075
	I1002 12:00:32.612118  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33697
	I1002 12:00:32.612252  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I1002 12:00:32.612652  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.612799  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.612847  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.613307  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.613337  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.613432  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.613504  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.613715  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.613718  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.613838  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.613955  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.614146  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.614197  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.614802  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.614842  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.615497  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.615534  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.617844  384344 addons.go:231] Setting addon default-storageclass=true in "no-preload-304121"
	W1002 12:00:32.617884  384344 addons.go:240] addon default-storageclass should already be in state true
	I1002 12:00:32.617914  384344 host.go:66] Checking if "no-preload-304121" exists ...
	I1002 12:00:32.618326  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.618436  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.634123  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36037
	I1002 12:00:32.634849  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.634953  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
	I1002 12:00:32.635328  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.635470  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.635495  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.635819  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.635841  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.635867  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.636193  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.636340  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.636373  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.636435  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.637717  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39059
	I1002 12:00:32.638051  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 12:00:32.640160  384344 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 12:00:32.642288  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 12:00:32.642300  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 12:00:32.642314  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 12:00:32.640240  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.642837  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.642863  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.643527  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.643695  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.645514  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.645565  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 12:00:32.648157  384344 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 12:00:32.645977  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 12:00:32.646152  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 12:00:32.650297  384344 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 12:00:32.650313  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 12:00:32.650328  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 12:00:32.650380  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.650547  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 12:00:32.650823  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 12:00:32.650961  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 12:00:32.653953  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.654560  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 12:00:32.654592  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.654886  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 12:00:32.655049  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 12:00:32.655195  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 12:00:32.655410  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 12:00:32.658005  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45823
	I1002 12:00:32.658525  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.659046  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.659059  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.659478  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.659611  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.661708  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 12:00:32.661982  384344 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 12:00:32.661998  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 12:00:32.662018  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 12:00:32.664637  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.665005  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 12:00:32.665023  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.665161  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 12:00:32.665335  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 12:00:32.665426  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 12:00:32.665610  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 12:00:32.723429  384344 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-304121" context rescaled to 1 replicas
	I1002 12:00:32.723469  384344 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 12:00:32.725329  384344 out.go:177] * Verifying Kubernetes components...
	I1002 12:00:32.726924  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:00:32.860425  384344 node_ready.go:35] waiting up to 6m0s for node "no-preload-304121" to be "Ready" ...
	I1002 12:00:32.860515  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 12:00:32.904658  384344 node_ready.go:49] node "no-preload-304121" has status "Ready":"True"
	I1002 12:00:32.904689  384344 node_ready.go:38] duration metric: took 44.230643ms waiting for node "no-preload-304121" to be "Ready" ...
	I1002 12:00:32.904705  384344 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:00:32.949887  384344 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:32.984050  384344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 12:00:32.997841  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 12:00:32.997869  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 12:00:32.999235  384344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 12:00:33.082015  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 12:00:33.082051  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 12:00:33.326524  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 12:00:33.326554  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 12:00:33.403533  384344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 12:00:34.844716  384344 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.984135314s)
	I1002 12:00:34.844752  384344 start.go:923] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1002 12:00:35.114639  384344 pod_ready.go:102] pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:35.538571  384344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.55447937s)
	I1002 12:00:35.538624  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.538641  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.538652  384344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.539381648s)
	I1002 12:00:35.538700  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.538713  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.539005  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539027  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.539039  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.539049  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.539137  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.539162  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539176  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.539194  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.539203  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.539299  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.539328  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539341  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.539537  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.539588  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539622  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.596015  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.596048  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.596384  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.596431  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.596449  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.641915  384344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.238327482s)
	I1002 12:00:35.641985  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.642007  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.642363  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.642389  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.642399  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.642409  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.642423  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.642716  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.642739  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.642750  384344 addons.go:467] Verifying addon metrics-server=true in "no-preload-304121"
	I1002 12:00:35.644696  384344 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1002 12:00:35.646046  384344 addons.go:502] enable addons completed in 3.05790546s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1002 12:00:36.113386  384344 pod_ready.go:92] pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.113415  384344 pod_ready.go:81] duration metric: took 3.163496821s waiting for pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.113429  384344 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.116264  384344 pod_ready.go:97] error getting pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-zcnv5" not found
	I1002 12:00:36.116290  384344 pod_ready.go:81] duration metric: took 2.85415ms waiting for pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace to be "Ready" ...
	E1002 12:00:36.116300  384344 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-zcnv5" not found
	I1002 12:00:36.116306  384344 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.126555  384344 pod_ready.go:92] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.126575  384344 pod_ready.go:81] duration metric: took 10.262082ms waiting for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.126583  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.137876  384344 pod_ready.go:92] pod "kube-apiserver-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.137903  384344 pod_ready.go:81] duration metric: took 11.312511ms waiting for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.137916  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.146526  384344 pod_ready.go:92] pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.146549  384344 pod_ready.go:81] duration metric: took 8.624341ms waiting for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.146561  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sprhm" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.307205  384344 pod_ready.go:92] pod "kube-proxy-sprhm" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.307231  384344 pod_ready.go:81] duration metric: took 160.663088ms waiting for pod "kube-proxy-sprhm" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.307241  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.707429  384344 pod_ready.go:92] pod "kube-scheduler-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.707455  384344 pod_ready.go:81] duration metric: took 400.207608ms waiting for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.707463  384344 pod_ready.go:38] duration metric: took 3.802745796s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:00:36.707480  384344 api_server.go:52] waiting for apiserver process to appear ...
	I1002 12:00:36.707537  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 12:00:36.733934  384344 api_server.go:72] duration metric: took 4.010431274s to wait for apiserver process to appear ...
	I1002 12:00:36.733962  384344 api_server.go:88] waiting for apiserver healthz status ...
	I1002 12:00:36.733979  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 12:00:36.740562  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 200:
	ok
	I1002 12:00:36.742234  384344 api_server.go:141] control plane version: v1.28.2
	I1002 12:00:36.742259  384344 api_server.go:131] duration metric: took 8.289515ms to wait for apiserver health ...
	I1002 12:00:36.742270  384344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 12:00:36.910934  384344 system_pods.go:59] 8 kube-system pods found
	I1002 12:00:36.910962  384344 system_pods.go:61] "coredns-5dd5756b68-st2bd" [6623fa3f-9a60-4364-bf08-7e84ae35d4b6] Running
	I1002 12:00:36.910967  384344 system_pods.go:61] "etcd-no-preload-304121" [f0a08dd5-ccdd-44a8-8d0a-ba5f617db7e0] Running
	I1002 12:00:36.910971  384344 system_pods.go:61] "kube-apiserver-no-preload-304121" [2e0d2991-fec5-44b4-8bb2-70206956c983] Running
	I1002 12:00:36.910976  384344 system_pods.go:61] "kube-controller-manager-no-preload-304121" [51031981-2958-4947-8d10-59a15a77ec1b] Running
	I1002 12:00:36.910980  384344 system_pods.go:61] "kube-proxy-sprhm" [d032413b-07c5-4478-bbdf-93383f85f73d] Running
	I1002 12:00:36.910983  384344 system_pods.go:61] "kube-scheduler-no-preload-304121" [f825ba3f-3bca-40ed-a5db-d3a3fc8b0751] Running
	I1002 12:00:36.910991  384344 system_pods.go:61] "metrics-server-57f55c9bc5-6c2hc" [020790e8-555b-4455-8e82-6ea49bb4212a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:36.911002  384344 system_pods.go:61] "storage-provisioner" [9c5b5a2d-e464-477e-9b5c-bf830ee9c640] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 12:00:36.911013  384344 system_pods.go:74] duration metric: took 168.734676ms to wait for pod list to return data ...
	I1002 12:00:36.911027  384344 default_sa.go:34] waiting for default service account to be created ...
	I1002 12:00:37.106994  384344 default_sa.go:45] found service account: "default"
	I1002 12:00:37.107038  384344 default_sa.go:55] duration metric: took 196.001935ms for default service account to be created ...
	I1002 12:00:37.107050  384344 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 12:00:37.310973  384344 system_pods.go:86] 8 kube-system pods found
	I1002 12:00:37.311012  384344 system_pods.go:89] "coredns-5dd5756b68-st2bd" [6623fa3f-9a60-4364-bf08-7e84ae35d4b6] Running
	I1002 12:00:37.311021  384344 system_pods.go:89] "etcd-no-preload-304121" [f0a08dd5-ccdd-44a8-8d0a-ba5f617db7e0] Running
	I1002 12:00:37.311028  384344 system_pods.go:89] "kube-apiserver-no-preload-304121" [2e0d2991-fec5-44b4-8bb2-70206956c983] Running
	I1002 12:00:37.311034  384344 system_pods.go:89] "kube-controller-manager-no-preload-304121" [51031981-2958-4947-8d10-59a15a77ec1b] Running
	I1002 12:00:37.311041  384344 system_pods.go:89] "kube-proxy-sprhm" [d032413b-07c5-4478-bbdf-93383f85f73d] Running
	I1002 12:00:37.311049  384344 system_pods.go:89] "kube-scheduler-no-preload-304121" [f825ba3f-3bca-40ed-a5db-d3a3fc8b0751] Running
	I1002 12:00:37.311060  384344 system_pods.go:89] "metrics-server-57f55c9bc5-6c2hc" [020790e8-555b-4455-8e82-6ea49bb4212a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:37.311075  384344 system_pods.go:89] "storage-provisioner" [9c5b5a2d-e464-477e-9b5c-bf830ee9c640] Running
	I1002 12:00:37.311093  384344 system_pods.go:126] duration metric: took 204.035391ms to wait for k8s-apps to be running ...
	I1002 12:00:37.311103  384344 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 12:00:37.311158  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:00:37.327711  384344 system_svc.go:56] duration metric: took 16.597865ms WaitForService to wait for kubelet.
	I1002 12:00:37.327736  384344 kubeadm.go:581] duration metric: took 4.604243467s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 12:00:37.327758  384344 node_conditions.go:102] verifying NodePressure condition ...
	I1002 12:00:37.506633  384344 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 12:00:37.506693  384344 node_conditions.go:123] node cpu capacity is 2
	I1002 12:00:37.506708  384344 node_conditions.go:105] duration metric: took 178.94359ms to run NodePressure ...
	I1002 12:00:37.506722  384344 start.go:228] waiting for startup goroutines ...
	I1002 12:00:37.506728  384344 start.go:233] waiting for cluster config update ...
	I1002 12:00:37.506738  384344 start.go:242] writing updated cluster config ...
	I1002 12:00:37.506999  384344 ssh_runner.go:195] Run: rm -f paused
	I1002 12:00:37.558171  384344 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 12:00:37.560280  384344 out.go:177] * Done! kubectl is now configured to use "no-preload-304121" cluster and "default" namespace by default
	I1002 12:00:34.160478  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:34.160520  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:34.160528  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:34.160540  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:34.160553  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:34.160577  384505 retry.go:31] will retry after 8.056057378s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:42.223209  384505 system_pods.go:86] 5 kube-system pods found
	I1002 12:00:42.223242  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:42.223251  384505 system_pods.go:89] "etcd-old-k8s-version-749860" [893d0bc0-cefb-48d6-82d8-6d6804184a67] Pending
	I1002 12:00:42.223257  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:42.223267  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:42.223276  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:42.223299  384505 retry.go:31] will retry after 9.279474557s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:51.510907  384505 system_pods.go:86] 6 kube-system pods found
	I1002 12:00:51.510937  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:51.510945  384505 system_pods.go:89] "etcd-old-k8s-version-749860" [893d0bc0-cefb-48d6-82d8-6d6804184a67] Running
	I1002 12:00:51.510949  384505 system_pods.go:89] "kube-apiserver-old-k8s-version-749860" [41854b6e-d738-4af3-9734-8133b2a299df] Pending
	I1002 12:00:51.510953  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:51.510959  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:51.510965  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:51.510995  384505 retry.go:31] will retry after 9.19295244s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:01:00.712167  384505 system_pods.go:86] 8 kube-system pods found
	I1002 12:01:00.712195  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:01:00.712201  384505 system_pods.go:89] "etcd-old-k8s-version-749860" [893d0bc0-cefb-48d6-82d8-6d6804184a67] Running
	I1002 12:01:00.712205  384505 system_pods.go:89] "kube-apiserver-old-k8s-version-749860" [41854b6e-d738-4af3-9734-8133b2a299df] Running
	I1002 12:01:00.712209  384505 system_pods.go:89] "kube-controller-manager-old-k8s-version-749860" [1531e118-f1f1-485e-b258-32e21b3385d8] Running
	I1002 12:01:00.712213  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:01:00.712217  384505 system_pods.go:89] "kube-scheduler-old-k8s-version-749860" [66983e5c-64ab-48ec-9c24-824f0a7cb36e] Running
	I1002 12:01:00.712223  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:01:00.712230  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:01:00.712237  384505 system_pods.go:126] duration metric: took 55.235875161s to wait for k8s-apps to be running ...
	I1002 12:01:00.712244  384505 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 12:01:00.712293  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:01:00.728970  384505 system_svc.go:56] duration metric: took 16.712185ms WaitForService to wait for kubelet.
	I1002 12:01:00.728999  384505 kubeadm.go:581] duration metric: took 1m5.625005524s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 12:01:00.729026  384505 node_conditions.go:102] verifying NodePressure condition ...
	I1002 12:01:00.733153  384505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 12:01:00.733180  384505 node_conditions.go:123] node cpu capacity is 2
	I1002 12:01:00.733196  384505 node_conditions.go:105] duration metric: took 4.162147ms to run NodePressure ...
	I1002 12:01:00.733209  384505 start.go:228] waiting for startup goroutines ...
	I1002 12:01:00.733216  384505 start.go:233] waiting for cluster config update ...
	I1002 12:01:00.733230  384505 start.go:242] writing updated cluster config ...
	I1002 12:01:00.733553  384505 ssh_runner.go:195] Run: rm -f paused
	I1002 12:01:00.784237  384505 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I1002 12:01:00.786178  384505 out.go:177] 
	W1002 12:01:00.787686  384505 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I1002 12:01:00.789104  384505 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1002 12:01:00.790521  384505 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-749860" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-02 11:54:31 UTC, ends at Mon 2023-10-02 12:08:35 UTC. --
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.807924500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=12be2844-ac31-4282-bace-9821cf2e1b3e name=/runtime.v1.RuntimeService/Version
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.809343250Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f570eed4-236f-41df-98c2-5f9543f6f312 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.809869198Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248515809854203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f570eed4-236f-41df-98c2-5f9543f6f312 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.810284368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d02ccd13-3652-4b46-b576-962fb3037b07 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.810358050Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d02ccd13-3652-4b46-b576-962fb3037b07 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.810666834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175,PodSandboxId:75e776743a341eabaacb9b9dd17a0623dc18a78bd8dd09443cba6b70274f410b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247739102202412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aff1275b-909d-4c70-9fb5-cb36170c591e,},Annotations:map[string]string{io.kubernetes.container.hash: da3ea7ea,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5279fe29e84fdd82d6b51df85dae9eee1dbcebf796c57ae25a534c2fd0917e20,PodSandboxId:30b85178c495cbd0c0b024ab2e5376342b1163d573e7547ded3790428a86401a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1696247718731002733,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e7f8435-3c92-447f-ad2c-c3e7da52e094,},Annotations:map[string]string{io.kubernetes.container.hash: ca9ef9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d,PodSandboxId:5c05ef8d8ce5e67397f24f906378abaf1f6e1c89026fae43140d17e167998470,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696247715793524794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9wv56,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f04d6125-ea28-41cc-9251-7ccee27162bc,},Annotations:map[string]string{io.kubernetes.container.hash: ba540247,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358,PodSandboxId:75e776743a341eabaacb9b9dd17a0623dc18a78bd8dd09443cba6b70274f410b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696247707982939452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: aff1275b-909d-4c70-9fb5-cb36170c591e,},Annotations:map[string]string{io.kubernetes.container.hash: da3ea7ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6,PodSandboxId:a4855a476b0c0b89510140f6b9ddc93bd6fb12ae14434a0c224de68939ad5ae0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696247707996357745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gchnc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
61811c7-2ac8-448a-b441-838f9aaf9145,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac3a542,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d,PodSandboxId:f8358a2d762d9c82f567b63310a724a21f33c3b7b555251edf79a3a3c1fbf920,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696247700773032427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e66c9b627bcf9a6af934f21fc5eb0505,},An
notations:map[string]string{io.kubernetes.container.hash: ca6b94bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e,PodSandboxId:48c194167752c2af879969befa6fefc77bc9effbc59909f196e991842ce6396c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696247700475133987,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7670623f64278461b660148b22f51806,},An
notations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735,PodSandboxId:a66bf166b0a00e30f0bed46517ba0818e740673dc653ad0af823a7204fc0675e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696247700040020331,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c3d20afce7e1c07e31633c2522947a,},An
notations:map[string]string{io.kubernetes.container.hash: c90fcb7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f,PodSandboxId:fd604b3ba21697a91242f479bab00b84059e167a31a9e44f747b982d01824ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696247700012459103,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
0d4795119f3d4d980acb130288fbaca,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d02ccd13-3652-4b46-b576-962fb3037b07 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.857073555Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ddce5fa9-04d6-44a7-a1f0-0829ef9115d8 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.857202379Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ddce5fa9-04d6-44a7-a1f0-0829ef9115d8 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.858477202Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c4e2866e-f691-43f1-ad5b-6496126e398f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.859256642Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248515859238038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c4e2866e-f691-43f1-ad5b-6496126e398f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.859737287Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=70ccda76-3324-411c-aacf-09abe2517f03 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.859811173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=70ccda76-3324-411c-aacf-09abe2517f03 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.860072268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175,PodSandboxId:75e776743a341eabaacb9b9dd17a0623dc18a78bd8dd09443cba6b70274f410b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247739102202412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aff1275b-909d-4c70-9fb5-cb36170c591e,},Annotations:map[string]string{io.kubernetes.container.hash: da3ea7ea,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5279fe29e84fdd82d6b51df85dae9eee1dbcebf796c57ae25a534c2fd0917e20,PodSandboxId:30b85178c495cbd0c0b024ab2e5376342b1163d573e7547ded3790428a86401a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1696247718731002733,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e7f8435-3c92-447f-ad2c-c3e7da52e094,},Annotations:map[string]string{io.kubernetes.container.hash: ca9ef9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d,PodSandboxId:5c05ef8d8ce5e67397f24f906378abaf1f6e1c89026fae43140d17e167998470,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696247715793524794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9wv56,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f04d6125-ea28-41cc-9251-7ccee27162bc,},Annotations:map[string]string{io.kubernetes.container.hash: ba540247,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358,PodSandboxId:75e776743a341eabaacb9b9dd17a0623dc18a78bd8dd09443cba6b70274f410b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696247707982939452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: aff1275b-909d-4c70-9fb5-cb36170c591e,},Annotations:map[string]string{io.kubernetes.container.hash: da3ea7ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6,PodSandboxId:a4855a476b0c0b89510140f6b9ddc93bd6fb12ae14434a0c224de68939ad5ae0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696247707996357745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gchnc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
61811c7-2ac8-448a-b441-838f9aaf9145,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac3a542,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d,PodSandboxId:f8358a2d762d9c82f567b63310a724a21f33c3b7b555251edf79a3a3c1fbf920,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696247700773032427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e66c9b627bcf9a6af934f21fc5eb0505,},An
notations:map[string]string{io.kubernetes.container.hash: ca6b94bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e,PodSandboxId:48c194167752c2af879969befa6fefc77bc9effbc59909f196e991842ce6396c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696247700475133987,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7670623f64278461b660148b22f51806,},An
notations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735,PodSandboxId:a66bf166b0a00e30f0bed46517ba0818e740673dc653ad0af823a7204fc0675e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696247700040020331,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c3d20afce7e1c07e31633c2522947a,},An
notations:map[string]string{io.kubernetes.container.hash: c90fcb7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f,PodSandboxId:fd604b3ba21697a91242f479bab00b84059e167a31a9e44f747b982d01824ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696247700012459103,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
0d4795119f3d4d980acb130288fbaca,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=70ccda76-3324-411c-aacf-09abe2517f03 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.884265085Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=dabb3ec5-cf17-49aa-9485-ca5058343181 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.884478797Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:30b85178c495cbd0c0b024ab2e5376342b1163d573e7547ded3790428a86401a,Metadata:&PodSandboxMetadata{Name:busybox,Uid:7e7f8435-3c92-447f-ad2c-c3e7da52e094,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696247714842285166,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e7f8435-3c92-447f-ad2c-c3e7da52e094,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-02T11:55:06.827373819Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5c05ef8d8ce5e67397f24f906378abaf1f6e1c89026fae43140d17e167998470,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-9wv56,Uid:f04d6125-ea28-41cc-9251-7ccee27162bc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:169624
7714818755469,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-9wv56,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f04d6125-ea28-41cc-9251-7ccee27162bc,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-02T11:55:06.827382813Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bd161e6b0d0ba1cdecdb7c8b169a6a819700611886a611c7561c447b1eb067c9,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-wk2c7,Uid:f28e9db7-2182-40d8-85a7-fa40c2ff8077,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696247710928657401,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-wk2c7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28e9db7-2182-40d8-85a7-fa40c2ff8077,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-02
T11:55:06.827384715Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:75e776743a341eabaacb9b9dd17a0623dc18a78bd8dd09443cba6b70274f410b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:aff1275b-909d-4c70-9fb5-cb36170c591e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696247707182646573,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aff1275b-909d-4c70-9fb5-cb36170c591e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"g
cr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-10-02T11:55:06.827371792Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a4855a476b0c0b89510140f6b9ddc93bd6fb12ae14434a0c224de68939ad5ae0,Metadata:&PodSandboxMetadata{Name:kube-proxy-gchnc,Uid:061811c7-2ac8-448a-b441-838f9aaf9145,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696247707172286267,Labels:map[string]string{controller-revision-hash: 5cbdb8dcbd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gchnc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 061811c7-2ac8-448a-b441-838f9aaf9145,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{ku
bernetes.io/config.seen: 2023-10-02T11:55:06.827381049Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a66bf166b0a00e30f0bed46517ba0818e740673dc653ad0af823a7204fc0675e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-777999,Uid:b6c3d20afce7e1c07e31633c2522947a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696247699396035098,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c3d20afce7e1c07e31633c2522947a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.251:8444,kubernetes.io/config.hash: b6c3d20afce7e1c07e31633c2522947a,kubernetes.io/config.seen: 2023-10-02T11:54:58.827817966Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:48c194167752c2af879969befa6fefc77bc9effbc59909f196e991842ce639
6c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-777999,Uid:7670623f64278461b660148b22f51806,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696247699374082485,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7670623f64278461b660148b22f51806,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7670623f64278461b660148b22f51806,kubernetes.io/config.seen: 2023-10-02T11:54:58.827819797Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fd604b3ba21697a91242f479bab00b84059e167a31a9e44f747b982d01824ea6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-777999,Uid:e0d4795119f3d4d980acb130288fbaca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696247699347874902,Labels:map[string]string{component: kube-controller-manager,
io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d4795119f3d4d980acb130288fbaca,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e0d4795119f3d4d980acb130288fbaca,kubernetes.io/config.seen: 2023-10-02T11:54:58.827818987Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f8358a2d762d9c82f567b63310a724a21f33c3b7b555251edf79a3a3c1fbf920,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-777999,Uid:e66c9b627bcf9a6af934f21fc5eb0505,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696247699343783524,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e66c9b627bcf9a6af934f21fc5eb0505,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-clie
nt-urls: https://192.168.61.251:2379,kubernetes.io/config.hash: e66c9b627bcf9a6af934f21fc5eb0505,kubernetes.io/config.seen: 2023-10-02T11:54:58.827812304Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=dabb3ec5-cf17-49aa-9485-ca5058343181 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.885639788Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c4d18447-07a9-4bd1-b5d5-1845277f24e3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.885690962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c4d18447-07a9-4bd1-b5d5-1845277f24e3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.885924330Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175,PodSandboxId:75e776743a341eabaacb9b9dd17a0623dc18a78bd8dd09443cba6b70274f410b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247739102202412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aff1275b-909d-4c70-9fb5-cb36170c591e,},Annotations:map[string]string{io.kubernetes.container.hash: da3ea7ea,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5279fe29e84fdd82d6b51df85dae9eee1dbcebf796c57ae25a534c2fd0917e20,PodSandboxId:30b85178c495cbd0c0b024ab2e5376342b1163d573e7547ded3790428a86401a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1696247718731002733,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e7f8435-3c92-447f-ad2c-c3e7da52e094,},Annotations:map[string]string{io.kubernetes.container.hash: ca9ef9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d,PodSandboxId:5c05ef8d8ce5e67397f24f906378abaf1f6e1c89026fae43140d17e167998470,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696247715793524794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9wv56,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f04d6125-ea28-41cc-9251-7ccee27162bc,},Annotations:map[string]string{io.kubernetes.container.hash: ba540247,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358,PodSandboxId:75e776743a341eabaacb9b9dd17a0623dc18a78bd8dd09443cba6b70274f410b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696247707982939452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: aff1275b-909d-4c70-9fb5-cb36170c591e,},Annotations:map[string]string{io.kubernetes.container.hash: da3ea7ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6,PodSandboxId:a4855a476b0c0b89510140f6b9ddc93bd6fb12ae14434a0c224de68939ad5ae0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696247707996357745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gchnc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
61811c7-2ac8-448a-b441-838f9aaf9145,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac3a542,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d,PodSandboxId:f8358a2d762d9c82f567b63310a724a21f33c3b7b555251edf79a3a3c1fbf920,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696247700773032427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e66c9b627bcf9a6af934f21fc5eb0505,},An
notations:map[string]string{io.kubernetes.container.hash: ca6b94bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e,PodSandboxId:48c194167752c2af879969befa6fefc77bc9effbc59909f196e991842ce6396c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696247700475133987,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7670623f64278461b660148b22f51806,},An
notations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735,PodSandboxId:a66bf166b0a00e30f0bed46517ba0818e740673dc653ad0af823a7204fc0675e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696247700040020331,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c3d20afce7e1c07e31633c2522947a,},An
notations:map[string]string{io.kubernetes.container.hash: c90fcb7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f,PodSandboxId:fd604b3ba21697a91242f479bab00b84059e167a31a9e44f747b982d01824ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696247700012459103,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
0d4795119f3d4d980acb130288fbaca,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c4d18447-07a9-4bd1-b5d5-1845277f24e3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.906730446Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0bc9b648-5ac7-4a3f-b0bd-759ebac6fc2e name=/runtime.v1.RuntimeService/Version
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.906780044Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0bc9b648-5ac7-4a3f-b0bd-759ebac6fc2e name=/runtime.v1.RuntimeService/Version
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.908133556Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1cf76cf1-7eca-4bb6-b953-8f90b26ca885 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.908478515Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248515908468231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1cf76cf1-7eca-4bb6-b953-8f90b26ca885 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.909647677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=32ee12ce-d2be-4f37-8f34-f77c2dea15d0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.909692208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=32ee12ce-d2be-4f37-8f34-f77c2dea15d0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:35 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:08:35.909873386Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175,PodSandboxId:75e776743a341eabaacb9b9dd17a0623dc18a78bd8dd09443cba6b70274f410b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247739102202412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aff1275b-909d-4c70-9fb5-cb36170c591e,},Annotations:map[string]string{io.kubernetes.container.hash: da3ea7ea,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5279fe29e84fdd82d6b51df85dae9eee1dbcebf796c57ae25a534c2fd0917e20,PodSandboxId:30b85178c495cbd0c0b024ab2e5376342b1163d573e7547ded3790428a86401a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1696247718731002733,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e7f8435-3c92-447f-ad2c-c3e7da52e094,},Annotations:map[string]string{io.kubernetes.container.hash: ca9ef9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d,PodSandboxId:5c05ef8d8ce5e67397f24f906378abaf1f6e1c89026fae43140d17e167998470,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696247715793524794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9wv56,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f04d6125-ea28-41cc-9251-7ccee27162bc,},Annotations:map[string]string{io.kubernetes.container.hash: ba540247,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358,PodSandboxId:75e776743a341eabaacb9b9dd17a0623dc18a78bd8dd09443cba6b70274f410b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696247707982939452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: aff1275b-909d-4c70-9fb5-cb36170c591e,},Annotations:map[string]string{io.kubernetes.container.hash: da3ea7ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6,PodSandboxId:a4855a476b0c0b89510140f6b9ddc93bd6fb12ae14434a0c224de68939ad5ae0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696247707996357745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gchnc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
61811c7-2ac8-448a-b441-838f9aaf9145,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac3a542,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d,PodSandboxId:f8358a2d762d9c82f567b63310a724a21f33c3b7b555251edf79a3a3c1fbf920,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696247700773032427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e66c9b627bcf9a6af934f21fc5eb0505,},An
notations:map[string]string{io.kubernetes.container.hash: ca6b94bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e,PodSandboxId:48c194167752c2af879969befa6fefc77bc9effbc59909f196e991842ce6396c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696247700475133987,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7670623f64278461b660148b22f51806,},An
notations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735,PodSandboxId:a66bf166b0a00e30f0bed46517ba0818e740673dc653ad0af823a7204fc0675e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696247700040020331,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c3d20afce7e1c07e31633c2522947a,},An
notations:map[string]string{io.kubernetes.container.hash: c90fcb7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f,PodSandboxId:fd604b3ba21697a91242f479bab00b84059e167a31a9e44f747b982d01824ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696247700012459103,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
0d4795119f3d4d980acb130288fbaca,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=32ee12ce-d2be-4f37-8f34-f77c2dea15d0 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2d3596d8e4114       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   75e776743a341       storage-provisioner
	5279fe29e84fd       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   30b85178c495c       busybox
	f4357b618abec       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   5c05ef8d8ce5e       coredns-5dd5756b68-9wv56
	d858d8eba37bc       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                      13 minutes ago      Running             kube-proxy                1                   a4855a476b0c0       kube-proxy-gchnc
	b5dd54a6498cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   75e776743a341       storage-provisioner
	8b9af145fa743       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   f8358a2d762d9       etcd-default-k8s-diff-port-777999
	7a5a17cf18027       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                      13 minutes ago      Running             kube-scheduler            1                   48c194167752c       kube-scheduler-default-k8s-diff-port-777999
	3d34e284efffd       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                      13 minutes ago      Running             kube-apiserver            1                   a66bf166b0a00       kube-apiserver-default-k8s-diff-port-777999
	beb885cf3eedd       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                      13 minutes ago      Running             kube-controller-manager   1                   fd604b3ba2169       kube-controller-manager-default-k8s-diff-port-777999
	
	* 
	* ==> coredns [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39911 - 12955 "HINFO IN 5381547072923470623.3344521106857374535. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.062853859s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-777999
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-777999
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=default-k8s-diff-port-777999
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T11_46_34_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 11:46:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-777999
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 12:08:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 12:05:48 +0000   Mon, 02 Oct 2023 11:46:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 12:05:48 +0000   Mon, 02 Oct 2023 11:46:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 12:05:48 +0000   Mon, 02 Oct 2023 11:46:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 12:05:48 +0000   Mon, 02 Oct 2023 11:55:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.251
	  Hostname:    default-k8s-diff-port-777999
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 8779df539c584f4fa2a1664ce1ea848f
	  System UUID:                8779df53-9c58-4f4f-a2a1-664ce1ea848f
	  Boot ID:                    8e86307d-4f39-4d78-b17c-0c82039497a9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5dd5756b68-9wv56                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-777999                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-default-k8s-diff-port-777999             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-777999    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-gchnc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-777999             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-57f55c9bc5-wk2c7                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                22m                kubelet          Node default-k8s-diff-port-777999 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-777999 event: Registered Node default-k8s-diff-port-777999 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-777999 event: Registered Node default-k8s-diff-port-777999 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct 2 11:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.075730] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.571556] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.387820] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152188] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.638291] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.009684] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.122997] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.179985] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.135871] systemd-fstab-generator[685]: Ignoring "noauto" for root device
	[  +0.278204] systemd-fstab-generator[709]: Ignoring "noauto" for root device
	[ +17.876847] systemd-fstab-generator[923]: Ignoring "noauto" for root device
	[Oct 2 11:55] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d] <==
	* {"level":"warn","ts":"2023-10-02T11:55:06.456715Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.932261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.61.251\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2023-10-02T11:55:06.456809Z","caller":"traceutil/trace.go:171","msg":"trace[426070056] range","detail":"{range_begin:/registry/masterleases/192.168.61.251; range_end:; response_count:1; response_revision:510; }","duration":"148.024126ms","start":"2023-10-02T11:55:06.308765Z","end":"2023-10-02T11:55:06.456789Z","steps":["trace[426070056] 'agreement among raft nodes before linearized reading'  (duration: 147.897743ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-02T11:55:09.044932Z","caller":"traceutil/trace.go:171","msg":"trace[693962018] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"163.951239ms","start":"2023-10-02T11:55:08.880961Z","end":"2023-10-02T11:55:09.044913Z","steps":["trace[693962018] 'process raft request'  (duration: 163.750795ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T11:55:09.045219Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.760529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" ","response":"range_response_count:1 size:203"}
	{"level":"info","ts":"2023-10-02T11:55:09.045281Z","caller":"traceutil/trace.go:171","msg":"trace[237393610] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:576; }","duration":"113.851159ms","start":"2023-10-02T11:55:08.931418Z","end":"2023-10-02T11:55:09.04527Z","steps":["trace[237393610] 'agreement among raft nodes before linearized reading'  (duration: 113.571637ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-02T11:55:09.04489Z","caller":"traceutil/trace.go:171","msg":"trace[1952759054] linearizableReadLoop","detail":"{readStateIndex:618; appliedIndex:617; }","duration":"113.422901ms","start":"2023-10-02T11:55:08.931442Z","end":"2023-10-02T11:55:09.044865Z","steps":["trace[1952759054] 'read index received'  (duration: 113.226549ms)","trace[1952759054] 'applied index is now lower than readState.Index'  (duration: 194.143µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-02T11:55:09.045526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.38932ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:3712"}
	{"level":"info","ts":"2023-10-02T11:55:09.045636Z","caller":"traceutil/trace.go:171","msg":"trace[136419149] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:576; }","duration":"109.422053ms","start":"2023-10-02T11:55:08.936123Z","end":"2023-10-02T11:55:09.045545Z","steps":["trace[136419149] 'agreement among raft nodes before linearized reading'  (duration: 109.353073ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T11:55:09.476883Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"425.15892ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-wk2c7.178a484d60a9b788\" ","response":"range_response_count:1 size:984"}
	{"level":"info","ts":"2023-10-02T11:55:09.477094Z","caller":"traceutil/trace.go:171","msg":"trace[2113176847] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-57f55c9bc5-wk2c7.178a484d60a9b788; range_end:; response_count:1; response_revision:576; }","duration":"425.378628ms","start":"2023-10-02T11:55:09.051696Z","end":"2023-10-02T11:55:09.477074Z","steps":["trace[2113176847] 'range keys from in-memory index tree'  (duration: 425.081072ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T11:55:09.477175Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-02T11:55:09.051682Z","time spent":"425.473682ms","remote":"127.0.0.1:55534","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":1007,"request content":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-wk2c7.178a484d60a9b788\" "}
	{"level":"info","ts":"2023-10-02T11:55:09.477881Z","caller":"traceutil/trace.go:171","msg":"trace[490392895] linearizableReadLoop","detail":"{readStateIndex:619; appliedIndex:618; }","duration":"405.77717ms","start":"2023-10-02T11:55:09.072093Z","end":"2023-10-02T11:55:09.47787Z","steps":["trace[490392895] 'read index received'  (duration: 405.651106ms)","trace[490392895] 'applied index is now lower than readState.Index'  (duration: 125.499µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-02T11:55:09.478044Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"405.985597ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-777999\" ","response":"range_response_count:1 size:4346"}
	{"level":"info","ts":"2023-10-02T11:55:09.478198Z","caller":"traceutil/trace.go:171","msg":"trace[883365813] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-777999; range_end:; response_count:1; response_revision:577; }","duration":"406.143763ms","start":"2023-10-02T11:55:09.072043Z","end":"2023-10-02T11:55:09.478187Z","steps":["trace[883365813] 'agreement among raft nodes before linearized reading'  (duration: 405.907465ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T11:55:09.478262Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-02T11:55:09.072029Z","time spent":"406.218449ms","remote":"127.0.0.1:55558","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":4369,"request content":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-777999\" "}
	{"level":"info","ts":"2023-10-02T11:55:09.47849Z","caller":"traceutil/trace.go:171","msg":"trace[1710003894] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"426.069524ms","start":"2023-10-02T11:55:09.052411Z","end":"2023-10-02T11:55:09.478481Z","steps":["trace[1710003894] 'process raft request'  (duration: 425.37676ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T11:55:09.479211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"427.269468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-10-02T11:55:09.479312Z","caller":"traceutil/trace.go:171","msg":"trace[975818954] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:576; }","duration":"427.411188ms","start":"2023-10-02T11:55:09.05189Z","end":"2023-10-02T11:55:09.479301Z","steps":["trace[975818954] 'range keys from in-memory index tree'  (duration: 424.808247ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T11:55:09.47935Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-02T11:55:09.051885Z","time spent":"427.451698ms","remote":"127.0.0.1:55562","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":230,"request content":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" "}
	{"level":"warn","ts":"2023-10-02T11:55:09.479089Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-02T11:55:09.052388Z","time spent":"426.16085ms","remote":"127.0.0.1:55558","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3558,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:552 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3504 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"info","ts":"2023-10-02T11:55:10.630101Z","caller":"traceutil/trace.go:171","msg":"trace[584673202] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"149.659967ms","start":"2023-10-02T11:55:10.480417Z","end":"2023-10-02T11:55:10.630077Z","steps":["trace[584673202] 'process raft request'  (duration: 149.429974ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-02T11:55:19.68063Z","caller":"traceutil/trace.go:171","msg":"trace[1514956505] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"103.629996ms","start":"2023-10-02T11:55:19.576917Z","end":"2023-10-02T11:55:19.680547Z","steps":["trace[1514956505] 'process raft request'  (duration: 100.565461ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-02T12:05:04.558744Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":862}
	{"level":"info","ts":"2023-10-02T12:05:04.561665Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":862,"took":"2.615615ms","hash":2110775100}
	{"level":"info","ts":"2023-10-02T12:05:04.561745Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2110775100,"revision":862,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  12:08:36 up 14 min,  0 users,  load average: 0.17, 0.13, 0.09
	Linux default-k8s-diff-port-777999 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735] <==
	* I1002 12:05:06.255700       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1002 12:05:07.255738       1 handler_proxy.go:93] no RequestInfo found in the context
	W1002 12:05:07.255798       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:05:07.256021       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:05:07.256048       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1002 12:05:07.255870       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1002 12:05:07.258179       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:06:06.100304       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1002 12:06:07.257289       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:06:07.257669       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:06:07.257764       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 12:06:07.258297       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:06:07.258363       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1002 12:06:07.259651       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:07:06.099616       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1002 12:08:06.099833       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1002 12:08:07.258640       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:08:07.258819       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:08:07.258872       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 12:08:07.259886       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:08:07.259909       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1002 12:08:07.259915       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f] <==
	* I1002 12:02:50.327489       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:03:19.841407       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:03:20.336988       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:03:49.851680       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:03:50.345419       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:04:19.857928       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:04:20.355698       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:04:49.864924       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:04:50.365727       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:05:19.872914       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:05:20.381855       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:05:49.879978       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:05:50.392115       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 12:06:06.910328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="364.187µs"
	E1002 12:06:19.891378       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:06:20.404519       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 12:06:21.896509       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="310.43µs"
	E1002 12:06:49.897789       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:06:50.413759       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:07:19.903726       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:07:20.423369       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:07:49.909713       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:07:50.432136       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:08:19.915692       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:08:20.443423       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6] <==
	* I1002 11:55:08.524683       1 server_others.go:69] "Using iptables proxy"
	I1002 11:55:08.547494       1 node.go:141] Successfully retrieved node IP: 192.168.61.251
	I1002 11:55:08.615800       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1002 11:55:08.615898       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 11:55:08.619258       1 server_others.go:152] "Using iptables Proxier"
	I1002 11:55:08.619355       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 11:55:08.619640       1 server.go:846] "Version info" version="v1.28.2"
	I1002 11:55:08.619745       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 11:55:08.620750       1 config.go:188] "Starting service config controller"
	I1002 11:55:08.620830       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 11:55:08.620877       1 config.go:97] "Starting endpoint slice config controller"
	I1002 11:55:08.620904       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 11:55:08.623856       1 config.go:315] "Starting node config controller"
	I1002 11:55:08.623982       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 11:55:08.721793       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 11:55:08.729486       1 shared_informer.go:318] Caches are synced for service config
	I1002 11:55:08.729520       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e] <==
	* I1002 11:55:03.240993       1 serving.go:348] Generated self-signed cert in-memory
	W1002 11:55:06.188368       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 11:55:06.188433       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 11:55:06.188449       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 11:55:06.188466       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 11:55:06.269359       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1002 11:55:06.269469       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 11:55:06.279840       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 11:55:06.279956       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 11:55:06.279904       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 11:55:06.282479       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 11:55:06.382814       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 11:54:31 UTC, ends at Mon 2023-10-02 12:08:36 UTC. --
	Oct 02 12:05:55 default-k8s-diff-port-777999 kubelet[929]: E1002 12:05:55.887383     929 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l9hgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-wk2c7_kube-system(f28e9db7-2182-40d8-85a7-fa40c2ff8077): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 02 12:05:55 default-k8s-diff-port-777999 kubelet[929]: E1002 12:05:55.887460     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:05:58 default-k8s-diff-port-777999 kubelet[929]: E1002 12:05:58.902455     929 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:05:58 default-k8s-diff-port-777999 kubelet[929]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:05:58 default-k8s-diff-port-777999 kubelet[929]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:05:58 default-k8s-diff-port-777999 kubelet[929]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:06:06 default-k8s-diff-port-777999 kubelet[929]: E1002 12:06:06.876931     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:06:21 default-k8s-diff-port-777999 kubelet[929]: E1002 12:06:21.876898     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:06:35 default-k8s-diff-port-777999 kubelet[929]: E1002 12:06:35.876458     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:06:47 default-k8s-diff-port-777999 kubelet[929]: E1002 12:06:47.876116     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:06:58 default-k8s-diff-port-777999 kubelet[929]: E1002 12:06:58.899793     929 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:06:58 default-k8s-diff-port-777999 kubelet[929]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:06:58 default-k8s-diff-port-777999 kubelet[929]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:06:58 default-k8s-diff-port-777999 kubelet[929]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:07:01 default-k8s-diff-port-777999 kubelet[929]: E1002 12:07:01.875793     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:07:15 default-k8s-diff-port-777999 kubelet[929]: E1002 12:07:15.875668     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:07:30 default-k8s-diff-port-777999 kubelet[929]: E1002 12:07:30.877860     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:07:45 default-k8s-diff-port-777999 kubelet[929]: E1002 12:07:45.876272     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:07:58 default-k8s-diff-port-777999 kubelet[929]: E1002 12:07:58.900811     929 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:07:58 default-k8s-diff-port-777999 kubelet[929]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:07:58 default-k8s-diff-port-777999 kubelet[929]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:07:58 default-k8s-diff-port-777999 kubelet[929]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:07:59 default-k8s-diff-port-777999 kubelet[929]: E1002 12:07:59.875965     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:08:14 default-k8s-diff-port-777999 kubelet[929]: E1002 12:08:14.875475     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:08:29 default-k8s-diff-port-777999 kubelet[929]: E1002 12:08:29.875804     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	
	* 
	* ==> storage-provisioner [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175] <==
	* I1002 11:55:39.232409       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 11:55:39.244170       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 11:55:39.244296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 11:55:56.658412       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 11:55:56.659382       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-777999_0815ca29-5dd5-4b61-9673-bb7301a61900!
	I1002 11:55:56.658799       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7409c62f-4559-4b58-9abe-58b34486fa7c", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-777999_0815ca29-5dd5-4b61-9673-bb7301a61900 became leader
	I1002 11:55:56.759621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-777999_0815ca29-5dd5-4b61-9673-bb7301a61900!
	
	* 
	* ==> storage-provisioner [b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358] <==
	* I1002 11:55:08.425436       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 11:55:38.446943       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-777999 -n default-k8s-diff-port-777999
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-777999 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-wk2c7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-777999 describe pod metrics-server-57f55c9bc5-wk2c7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-777999 describe pod metrics-server-57f55c9bc5-wk2c7: exit status 1 (70.041635ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-wk2c7" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-777999 describe pod metrics-server-57f55c9bc5-wk2c7: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.32s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.23s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-487027 -n embed-certs-487027
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-10-02 12:08:44.148241061 +0000 UTC m=+5574.719927110
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-487027 -n embed-certs-487027
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-487027 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-487027 logs -n 25: (1.630574821s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-124285 sudo cat                              | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo find                             | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo crio                             | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-124285                                       | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-448198 | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | disable-driver-mounts-448198                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:47 UTC |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-304121             | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC | 02 Oct 23 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-749860        | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC | 02 Oct 23 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-487027            | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC | 02 Oct 23 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-487027                                  | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-777999  | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC | 02 Oct 23 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC |                     |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-304121                  | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 12:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-749860             | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-487027                 | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-487027                                  | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-777999       | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:59 UTC |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 11:50:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 11:50:14.045882  384965 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:50:14.045995  384965 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:50:14.046005  384965 out.go:309] Setting ErrFile to fd 2...
	I1002 11:50:14.046009  384965 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:50:14.046207  384965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 11:50:14.046807  384965 out.go:303] Setting JSON to false
	I1002 11:50:14.047867  384965 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9160,"bootTime":1696238254,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 11:50:14.047937  384965 start.go:138] virtualization: kvm guest
	I1002 11:50:14.050148  384965 out.go:177] * [default-k8s-diff-port-777999] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 11:50:14.051736  384965 notify.go:220] Checking for updates...
	I1002 11:50:14.051738  384965 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 11:50:14.053419  384965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:50:14.055001  384965 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:50:14.056531  384965 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:50:14.057828  384965 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 11:50:14.059154  384965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 11:50:14.060884  384965 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:50:14.061318  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:50:14.061365  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:50:14.077285  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36537
	I1002 11:50:14.077670  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:50:14.078164  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:50:14.078184  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:50:14.078590  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:50:14.078766  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:50:14.079011  384965 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 11:50:14.079285  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:50:14.079321  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:50:14.093519  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38549
	I1002 11:50:14.093897  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:50:14.094331  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:50:14.094375  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:50:14.094689  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:50:14.094875  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:50:14.127852  384965 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 11:50:14.129579  384965 start.go:298] selected driver: kvm2
	I1002 11:50:14.129589  384965 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-777999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:50:14.129734  384965 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 11:50:14.130441  384965 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:50:14.130517  384965 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 11:50:14.145313  384965 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 11:50:14.145678  384965 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 11:50:14.145737  384965 cni.go:84] Creating CNI manager for ""
	I1002 11:50:14.145747  384965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:50:14.145754  384965 start_flags.go:321] config:
	{Name:default-k8s-diff-port-777999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-77799
9 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:50:14.145885  384965 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:50:14.147697  384965 out.go:177] * Starting control plane node default-k8s-diff-port-777999 in cluster default-k8s-diff-port-777999
	I1002 11:50:14.518571  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:14.149188  384965 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:50:14.149229  384965 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1002 11:50:14.149243  384965 cache.go:57] Caching tarball of preloaded images
	I1002 11:50:14.149342  384965 preload.go:174] Found /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 11:50:14.149355  384965 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 11:50:14.149469  384965 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/config.json ...
	I1002 11:50:14.149690  384965 start.go:365] acquiring machines lock for default-k8s-diff-port-777999: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:50:17.590603  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:23.670608  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:26.742637  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:32.822640  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:35.894704  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:41.974682  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:45.046703  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:51.126633  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:54.198624  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:00.278622  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:03.350650  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:09.430627  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:12.502639  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:18.582668  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:21.654622  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:27.734588  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:30.806674  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:36.886711  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:39.958677  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:46.038638  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:49.110583  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:55.190669  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:58.262632  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:04.342658  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:07.414733  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:13.494648  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:16.566610  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:22.646664  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:25.718682  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:31.798673  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:34.870620  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:40.950664  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:44.022695  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:50.102629  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:53.174698  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:59.254603  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:02.326684  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:08.406661  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:11.478769  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:17.558670  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:20.630696  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:26.710600  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:29.782676  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:35.862655  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:38.867149  384505 start.go:369] acquired machines lock for "old-k8s-version-749860" in 4m24.621828644s
	I1002 11:53:38.867251  384505 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:53:38.867260  384505 fix.go:54] fixHost starting: 
	I1002 11:53:38.867725  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:53:38.867761  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:53:38.882900  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45593
	I1002 11:53:38.883484  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:53:38.883950  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:53:38.883974  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:53:38.884318  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:53:38.884530  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:38.884688  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:53:38.886067  384505 fix.go:102] recreateIfNeeded on old-k8s-version-749860: state=Stopped err=<nil>
	I1002 11:53:38.886102  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	W1002 11:53:38.886288  384505 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:53:38.888401  384505 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-749860" ...
	I1002 11:53:38.889752  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Start
	I1002 11:53:38.889924  384505 main.go:141] libmachine: (old-k8s-version-749860) Ensuring networks are active...
	I1002 11:53:38.890638  384505 main.go:141] libmachine: (old-k8s-version-749860) Ensuring network default is active
	I1002 11:53:38.890980  384505 main.go:141] libmachine: (old-k8s-version-749860) Ensuring network mk-old-k8s-version-749860 is active
	I1002 11:53:38.891314  384505 main.go:141] libmachine: (old-k8s-version-749860) Getting domain xml...
	I1002 11:53:38.892257  384505 main.go:141] libmachine: (old-k8s-version-749860) Creating domain...
	I1002 11:53:38.864675  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:53:38.864716  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:53:38.866979  384344 machine.go:91] provisioned docker machine in 4m37.398507067s
	I1002 11:53:38.867033  384344 fix.go:56] fixHost completed within 4m37.419547722s
	I1002 11:53:38.867039  384344 start.go:83] releasing machines lock for "no-preload-304121", held for 4m37.419568347s
	W1002 11:53:38.867080  384344 start.go:688] error starting host: provision: host is not running
	W1002 11:53:38.867230  384344 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1002 11:53:38.867240  384344 start.go:703] Will try again in 5 seconds ...
	I1002 11:53:40.120018  384505 main.go:141] libmachine: (old-k8s-version-749860) Waiting to get IP...
	I1002 11:53:40.120927  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:40.121258  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:40.121366  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:40.121241  385500 retry.go:31] will retry after 204.223254ms: waiting for machine to come up
	I1002 11:53:40.326895  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:40.327332  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:40.327351  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:40.327293  385500 retry.go:31] will retry after 300.58131ms: waiting for machine to come up
	I1002 11:53:40.629931  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:40.630293  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:40.630324  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:40.630247  385500 retry.go:31] will retry after 460.804681ms: waiting for machine to come up
	I1002 11:53:41.092440  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:41.092887  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:41.092914  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:41.092838  385500 retry.go:31] will retry after 573.592817ms: waiting for machine to come up
	I1002 11:53:41.668507  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:41.668916  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:41.668955  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:41.668879  385500 retry.go:31] will retry after 647.261387ms: waiting for machine to come up
	I1002 11:53:42.317738  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:42.318193  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:42.318228  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:42.318135  385500 retry.go:31] will retry after 643.115699ms: waiting for machine to come up
	I1002 11:53:42.963169  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:42.963572  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:42.963595  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:42.963517  385500 retry.go:31] will retry after 1.059074571s: waiting for machine to come up
	I1002 11:53:44.024372  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:44.024750  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:44.024785  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:44.024703  385500 retry.go:31] will retry after 1.142402067s: waiting for machine to come up
	I1002 11:53:43.868857  384344 start.go:365] acquiring machines lock for no-preload-304121: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:53:45.169146  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:45.169470  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:45.169509  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:45.169430  385500 retry.go:31] will retry after 1.244757741s: waiting for machine to come up
	I1002 11:53:46.415640  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:46.416049  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:46.416078  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:46.416030  385500 retry.go:31] will retry after 2.066150597s: waiting for machine to come up
	I1002 11:53:48.483477  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:48.483998  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:48.484023  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:48.483921  385500 retry.go:31] will retry after 2.521584671s: waiting for machine to come up
	I1002 11:53:51.008090  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:51.008535  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:51.008565  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:51.008455  385500 retry.go:31] will retry after 2.896131667s: waiting for machine to come up
	I1002 11:53:53.905835  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:53.906274  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:53.906309  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:53.906207  385500 retry.go:31] will retry after 3.463250216s: waiting for machine to come up
	I1002 11:53:58.755219  384787 start.go:369] acquired machines lock for "embed-certs-487027" in 4m10.971064405s
	I1002 11:53:58.755286  384787 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:53:58.755301  384787 fix.go:54] fixHost starting: 
	I1002 11:53:58.755691  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:53:58.755733  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:53:58.772186  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38267
	I1002 11:53:58.772591  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:53:58.773071  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:53:58.773101  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:53:58.773409  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:53:58.773585  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:53:58.773710  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:53:58.775231  384787 fix.go:102] recreateIfNeeded on embed-certs-487027: state=Stopped err=<nil>
	I1002 11:53:58.775273  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	W1002 11:53:58.775449  384787 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:53:58.778132  384787 out.go:177] * Restarting existing kvm2 VM for "embed-certs-487027" ...
	I1002 11:53:57.373844  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.374176  384505 main.go:141] libmachine: (old-k8s-version-749860) Found IP for machine: 192.168.83.82
	I1002 11:53:57.374195  384505 main.go:141] libmachine: (old-k8s-version-749860) Reserving static IP address...
	I1002 11:53:57.374208  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has current primary IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.374680  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "old-k8s-version-749860", mac: "52:54:00:d4:c3:b0", ip: "192.168.83.82"} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.374711  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | skip adding static IP to network mk-old-k8s-version-749860 - found existing host DHCP lease matching {name: "old-k8s-version-749860", mac: "52:54:00:d4:c3:b0", ip: "192.168.83.82"}
	I1002 11:53:57.374725  384505 main.go:141] libmachine: (old-k8s-version-749860) Reserved static IP address: 192.168.83.82
	I1002 11:53:57.374741  384505 main.go:141] libmachine: (old-k8s-version-749860) Waiting for SSH to be available...
	I1002 11:53:57.374758  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Getting to WaitForSSH function...
	I1002 11:53:57.377368  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.377757  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.377791  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.377890  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Using SSH client type: external
	I1002 11:53:57.377933  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa (-rw-------)
	I1002 11:53:57.377976  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:53:57.377995  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | About to run SSH command:
	I1002 11:53:57.378008  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | exit 0
	I1002 11:53:57.474496  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | SSH cmd err, output: <nil>: 
	I1002 11:53:57.474881  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetConfigRaw
	I1002 11:53:57.475581  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:53:57.478078  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.478423  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.478464  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.478679  384505 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/config.json ...
	I1002 11:53:57.478876  384505 machine.go:88] provisioning docker machine ...
	I1002 11:53:57.478895  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:57.479118  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetMachineName
	I1002 11:53:57.479286  384505 buildroot.go:166] provisioning hostname "old-k8s-version-749860"
	I1002 11:53:57.479300  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetMachineName
	I1002 11:53:57.479509  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:57.481462  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.481768  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.481805  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.481935  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:57.482138  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.482280  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.482438  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:57.482611  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:57.483038  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:57.483051  384505 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-749860 && echo "old-k8s-version-749860" | sudo tee /etc/hostname
	I1002 11:53:57.622724  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-749860
	
	I1002 11:53:57.622760  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:57.626222  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.626663  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.626707  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.626840  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:57.627102  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.627297  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.627513  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:57.627678  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:57.628068  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:57.628089  384505 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-749860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-749860/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-749860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:53:57.767587  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:53:57.767664  384505 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:53:57.767708  384505 buildroot.go:174] setting up certificates
	I1002 11:53:57.767721  384505 provision.go:83] configureAuth start
	I1002 11:53:57.767734  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetMachineName
	I1002 11:53:57.768045  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:53:57.771158  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.771591  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.771620  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.771825  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:57.774031  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.774444  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.774523  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.774529  384505 provision.go:138] copyHostCerts
	I1002 11:53:57.774608  384505 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:53:57.774623  384505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:53:57.774695  384505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:53:57.774787  384505 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:53:57.774797  384505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:53:57.774821  384505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:53:57.774884  384505 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:53:57.774891  384505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:53:57.774912  384505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:53:57.774970  384505 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-749860 san=[192.168.83.82 192.168.83.82 localhost 127.0.0.1 minikube old-k8s-version-749860]
	I1002 11:53:58.003098  384505 provision.go:172] copyRemoteCerts
	I1002 11:53:58.003163  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:53:58.003190  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.005944  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.006310  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.006345  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.006482  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.006734  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.006887  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.007049  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.099927  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:53:58.123424  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 11:53:58.147578  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 11:53:58.171190  384505 provision.go:86] duration metric: configureAuth took 403.448571ms
	I1002 11:53:58.171228  384505 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:53:58.171440  384505 config.go:182] Loaded profile config "old-k8s-version-749860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1002 11:53:58.171575  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.174314  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.174684  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.174723  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.174860  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.175078  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.175274  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.175409  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.175596  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:58.175908  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:58.175923  384505 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:53:58.491028  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:53:58.491062  384505 machine.go:91] provisioned docker machine in 1.012168334s
	I1002 11:53:58.491072  384505 start.go:300] post-start starting for "old-k8s-version-749860" (driver="kvm2")
	I1002 11:53:58.491085  384505 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:53:58.491106  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.491521  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:53:58.491558  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.494009  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.494382  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.494415  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.494546  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.494753  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.494903  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.495037  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.588465  384505 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:53:58.592844  384505 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:53:58.592872  384505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:53:58.592940  384505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:53:58.593047  384505 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:53:58.593171  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:53:58.601583  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:53:58.624453  384505 start.go:303] post-start completed in 133.365398ms
	I1002 11:53:58.624486  384505 fix.go:56] fixHost completed within 19.757224844s
	I1002 11:53:58.624511  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.627104  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.627476  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.627534  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.627695  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.627913  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.628105  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.628253  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.628426  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:58.628749  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:58.628762  384505 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:53:58.755032  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247638.703145377
	
	I1002 11:53:58.755056  384505 fix.go:206] guest clock: 1696247638.703145377
	I1002 11:53:58.755066  384505 fix.go:219] Guest: 2023-10-02 11:53:58.703145377 +0000 UTC Remote: 2023-10-02 11:53:58.624490602 +0000 UTC m=+284.515069275 (delta=78.654775ms)
	I1002 11:53:58.755092  384505 fix.go:190] guest clock delta is within tolerance: 78.654775ms
	I1002 11:53:58.755098  384505 start.go:83] releasing machines lock for "old-k8s-version-749860", held for 19.887910329s
	I1002 11:53:58.755126  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.755438  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:53:58.758172  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.758431  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.758467  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.758673  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.759288  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.759466  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.759560  384505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:53:58.759620  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.759717  384505 ssh_runner.go:195] Run: cat /version.json
	I1002 11:53:58.759748  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.762471  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.762618  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.762847  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.762879  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.762911  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.762943  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.763162  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.763185  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.763347  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.763363  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.763487  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.763661  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.763671  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.763828  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.880436  384505 ssh_runner.go:195] Run: systemctl --version
	I1002 11:53:58.886540  384505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:53:59.035347  384505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:53:59.041510  384505 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:53:59.041604  384505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:53:59.056030  384505 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:53:59.056062  384505 start.go:469] detecting cgroup driver to use...
	I1002 11:53:59.056147  384505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:53:59.068680  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:53:59.080770  384505 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:53:59.080823  384505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:53:59.093059  384505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:53:59.106603  384505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:53:59.223135  384505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:53:59.364085  384505 docker.go:213] disabling docker service ...
	I1002 11:53:59.364161  384505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:53:59.378131  384505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:53:59.390380  384505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:53:59.522236  384505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:53:59.663336  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:53:59.677221  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:53:59.694283  384505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1002 11:53:59.694380  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.703409  384505 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:53:59.703481  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.712316  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.721255  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.731204  384505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:53:59.741152  384505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:53:59.748978  384505 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:53:59.749036  384505 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:53:59.761692  384505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:53:59.770571  384505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:53:59.882809  384505 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:54:00.046741  384505 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:54:00.046843  384505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:54:00.051911  384505 start.go:537] Will wait 60s for crictl version
	I1002 11:54:00.051988  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:00.055847  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:54:00.099999  384505 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:54:00.100084  384505 ssh_runner.go:195] Run: crio --version
	I1002 11:54:00.155271  384505 ssh_runner.go:195] Run: crio --version
	I1002 11:54:00.202213  384505 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1002 11:53:58.780030  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Start
	I1002 11:53:58.780201  384787 main.go:141] libmachine: (embed-certs-487027) Ensuring networks are active...
	I1002 11:53:58.780857  384787 main.go:141] libmachine: (embed-certs-487027) Ensuring network default is active
	I1002 11:53:58.781206  384787 main.go:141] libmachine: (embed-certs-487027) Ensuring network mk-embed-certs-487027 is active
	I1002 11:53:58.781581  384787 main.go:141] libmachine: (embed-certs-487027) Getting domain xml...
	I1002 11:53:58.782269  384787 main.go:141] libmachine: (embed-certs-487027) Creating domain...
	I1002 11:54:00.079808  384787 main.go:141] libmachine: (embed-certs-487027) Waiting to get IP...
	I1002 11:54:00.080676  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:00.081052  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:00.081202  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:00.081070  385615 retry.go:31] will retry after 291.88616ms: waiting for machine to come up
	I1002 11:54:00.374941  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:00.375493  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:00.375526  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:00.375441  385615 retry.go:31] will retry after 315.924643ms: waiting for machine to come up
	I1002 11:54:00.693196  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:00.693804  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:00.693840  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:00.693754  385615 retry.go:31] will retry after 473.967353ms: waiting for machine to come up
	I1002 11:54:01.169616  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:01.170137  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:01.170168  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:01.170099  385615 retry.go:31] will retry after 490.884713ms: waiting for machine to come up
	I1002 11:54:01.662881  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:01.663427  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:01.663459  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:01.663380  385615 retry.go:31] will retry after 590.285109ms: waiting for machine to come up
	I1002 11:54:02.255409  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:02.256020  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:02.256048  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:02.255956  385615 retry.go:31] will retry after 586.734935ms: waiting for machine to come up
	I1002 11:54:00.203709  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:54:00.206822  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:54:00.207269  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:54:00.207308  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:54:00.207533  384505 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1002 11:54:00.211596  384505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:00.224503  384505 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1002 11:54:00.224558  384505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:00.267915  384505 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1002 11:54:00.267986  384505 ssh_runner.go:195] Run: which lz4
	I1002 11:54:00.272086  384505 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 11:54:00.276281  384505 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:54:00.276322  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1002 11:54:02.169153  384505 crio.go:444] Took 1.897111 seconds to copy over tarball
	I1002 11:54:02.169248  384505 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 11:54:02.844615  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:02.845091  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:02.845129  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:02.845049  385615 retry.go:31] will retry after 765.906555ms: waiting for machine to come up
	I1002 11:54:03.612904  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:03.613374  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:03.613515  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:03.613306  385615 retry.go:31] will retry after 1.240249135s: waiting for machine to come up
	I1002 11:54:04.855370  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:04.855832  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:04.855858  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:04.855785  385615 retry.go:31] will retry after 1.741253702s: waiting for machine to come up
	I1002 11:54:06.599800  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:06.600279  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:06.600307  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:06.600221  385615 retry.go:31] will retry after 1.945988456s: waiting for machine to come up
	I1002 11:54:05.257359  384505 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.088072266s)
	I1002 11:54:05.257395  384505 crio.go:451] Took 3.088214 seconds to extract the tarball
	I1002 11:54:05.257408  384505 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 11:54:05.296693  384505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:05.347131  384505 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1002 11:54:05.347156  384505 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 11:54:05.347231  384505 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:54:05.347239  384505 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.347291  384505 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.347523  384505 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.347545  384505 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.347590  384505 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1002 11:54:05.347712  384505 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.347797  384505 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.349061  384505 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.349109  384505 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:54:05.349136  384505 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.349165  384505 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.349072  384505 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.349076  384505 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.349075  384505 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.349490  384505 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1002 11:54:05.494581  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.497665  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.499676  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.503426  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1002 11:54:05.504502  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.507776  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.511534  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.589967  384505 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1002 11:54:05.590038  384505 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.590101  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.653382  384505 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1002 11:54:05.653450  384505 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.653539  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.674391  384505 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1002 11:54:05.674430  384505 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1002 11:54:05.674447  384505 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.674467  384505 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1002 11:54:05.674508  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.674498  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.674583  384505 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1002 11:54:05.674621  384505 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.674671  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.676359  384505 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1002 11:54:05.676390  384505 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.676425  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.680824  384505 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1002 11:54:05.680858  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.680871  384505 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.680894  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.680905  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.682827  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1002 11:54:05.690404  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.690496  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.690562  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.810224  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1002 11:54:05.840439  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1002 11:54:05.840472  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.840535  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1002 11:54:05.840544  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1002 11:54:05.840583  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1002 11:54:05.840643  384505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1002 11:54:05.840663  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1002 11:54:05.874997  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1002 11:54:05.875049  384505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1002 11:54:05.875079  384505 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1002 11:54:05.875136  384505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1002 11:54:06.317119  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:54:07.926701  384505 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.609537315s)
	I1002 11:54:07.926715  384505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.051548545s)
	I1002 11:54:07.926786  384505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1002 11:54:07.926855  384505 cache_images.go:92] LoadImages completed in 2.579686998s
	W1002 11:54:07.926953  384505 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I1002 11:54:07.927077  384505 ssh_runner.go:195] Run: crio config
	I1002 11:54:07.991410  384505 cni.go:84] Creating CNI manager for ""
	I1002 11:54:07.991433  384505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:07.991452  384505 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:54:07.991473  384505 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.82 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-749860 NodeName:old-k8s-version-749860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1002 11:54:07.991665  384505 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-749860"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.82
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.82"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-749860
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.82:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:54:07.991752  384505 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-749860 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-749860 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:54:07.991814  384505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1002 11:54:08.002239  384505 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:54:08.002313  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:54:08.012375  384505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1002 11:54:08.031554  384505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:54:08.050801  384505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1002 11:54:08.068326  384505 ssh_runner.go:195] Run: grep 192.168.83.82	control-plane.minikube.internal$ /etc/hosts
	I1002 11:54:08.072798  384505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.82	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:08.085261  384505 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860 for IP: 192.168.83.82
	I1002 11:54:08.085320  384505 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:54:08.085511  384505 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:54:08.085555  384505 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:54:08.085682  384505 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/client.key
	I1002 11:54:08.085771  384505 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/apiserver.key.bc78c23c
	I1002 11:54:08.085823  384505 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/proxy-client.key
	I1002 11:54:08.085973  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:54:08.086020  384505 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:54:08.086035  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:54:08.086071  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:54:08.086101  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:54:08.086163  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:54:08.086237  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:08.087038  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:54:08.111230  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:54:08.133515  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:54:08.157382  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:54:08.180186  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:54:08.210075  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:54:08.232068  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:54:08.253873  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:54:08.276866  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:54:08.300064  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:54:08.322265  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:54:08.346808  384505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:54:08.367194  384505 ssh_runner.go:195] Run: openssl version
	I1002 11:54:08.374709  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:54:08.389274  384505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:08.395338  384505 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:08.395420  384505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:08.401338  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:54:08.412228  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:54:08.423293  384505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:54:08.428146  384505 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:54:08.428213  384505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:54:08.434177  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:54:08.449342  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:54:08.463678  384505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:54:08.468723  384505 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:54:08.468795  384505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:54:08.476711  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:54:08.492116  384505 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:54:08.498510  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:54:08.504961  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:54:08.513012  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:54:08.520620  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:54:08.528578  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:54:08.534685  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:54:08.541262  384505 kubeadm.go:404] StartCluster: {Name:old-k8s-version-749860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-749860 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.82 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:54:08.541401  384505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:54:08.541474  384505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:08.579821  384505 cri.go:89] found id: ""
	I1002 11:54:08.579899  384505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:54:08.590328  384505 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:54:08.590359  384505 kubeadm.go:636] restartCluster start
	I1002 11:54:08.590419  384505 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:54:08.600034  384505 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:08.601660  384505 kubeconfig.go:92] found "old-k8s-version-749860" server: "https://192.168.83.82:8443"
	I1002 11:54:08.605641  384505 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:54:08.615274  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:08.615340  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:08.630952  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:08.630979  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:08.631032  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:08.642433  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:08.547687  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:08.548295  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:08.548331  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:08.548238  385615 retry.go:31] will retry after 2.817726625s: waiting for machine to come up
	I1002 11:54:11.367346  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:11.367909  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:11.367943  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:11.367859  385615 retry.go:31] will retry after 3.066326625s: waiting for machine to come up
	I1002 11:54:09.142569  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:09.143607  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:09.155937  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:09.642536  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:09.642637  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:09.655230  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:10.142683  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:10.142769  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:10.155206  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:10.642757  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:10.642857  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:10.659345  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:11.142860  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:11.142955  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:11.158336  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:11.642849  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:11.642934  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:11.658819  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:12.143538  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:12.143645  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:12.159984  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:12.642536  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:12.642679  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:12.658031  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:13.143496  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:13.143607  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:13.159279  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:13.643567  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:13.643659  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:13.657189  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:14.435299  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:14.435744  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:14.435777  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:14.435699  385615 retry.go:31] will retry after 3.446313194s: waiting for machine to come up
	I1002 11:54:19.007568  384965 start.go:369] acquired machines lock for "default-k8s-diff-port-777999" in 4m4.857829673s
	I1002 11:54:19.007726  384965 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:54:19.007735  384965 fix.go:54] fixHost starting: 
	I1002 11:54:19.008181  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:54:19.008225  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:54:19.025286  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38903
	I1002 11:54:19.025755  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:54:19.026243  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:54:19.026265  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:54:19.026648  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:54:19.026869  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:19.027056  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:54:19.028773  384965 fix.go:102] recreateIfNeeded on default-k8s-diff-port-777999: state=Stopped err=<nil>
	I1002 11:54:19.028799  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	W1002 11:54:19.028984  384965 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:54:19.031466  384965 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-777999" ...
	I1002 11:54:19.033140  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Start
	I1002 11:54:19.033346  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Ensuring networks are active...
	I1002 11:54:19.034009  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Ensuring network default is active
	I1002 11:54:19.034440  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Ensuring network mk-default-k8s-diff-port-777999 is active
	I1002 11:54:19.034843  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Getting domain xml...
	I1002 11:54:19.035519  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Creating domain...
	I1002 11:54:14.142550  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:14.142618  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:14.154742  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:14.643429  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:14.643522  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:14.656075  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:15.142577  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:15.142669  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:15.154422  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:15.643360  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:15.643450  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:15.655255  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:16.142806  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:16.142948  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:16.154896  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:16.643505  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:16.643581  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:16.655413  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:17.142981  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:17.143087  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:17.156411  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:17.642996  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:17.643100  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:17.656886  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:18.143481  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:18.143563  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:18.157184  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:18.616095  384505 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:54:18.616128  384505 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:54:18.616142  384505 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:54:18.616204  384505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:18.654952  384505 cri.go:89] found id: ""
	I1002 11:54:18.655033  384505 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:54:18.674155  384505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:54:18.685052  384505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:54:18.685116  384505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:18.695816  384505 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:18.695844  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:18.821270  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:17.886333  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.886895  384787 main.go:141] libmachine: (embed-certs-487027) Found IP for machine: 192.168.72.147
	I1002 11:54:17.886926  384787 main.go:141] libmachine: (embed-certs-487027) Reserving static IP address...
	I1002 11:54:17.886947  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has current primary IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.887365  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "embed-certs-487027", mac: "52:54:00:06:60:23", ip: "192.168.72.147"} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.887396  384787 main.go:141] libmachine: (embed-certs-487027) DBG | skip adding static IP to network mk-embed-certs-487027 - found existing host DHCP lease matching {name: "embed-certs-487027", mac: "52:54:00:06:60:23", ip: "192.168.72.147"}
	I1002 11:54:17.887404  384787 main.go:141] libmachine: (embed-certs-487027) Reserved static IP address: 192.168.72.147
	I1002 11:54:17.887420  384787 main.go:141] libmachine: (embed-certs-487027) Waiting for SSH to be available...
	I1002 11:54:17.887437  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Getting to WaitForSSH function...
	I1002 11:54:17.889775  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.890175  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.890214  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.890410  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Using SSH client type: external
	I1002 11:54:17.890434  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa (-rw-------)
	I1002 11:54:17.890470  384787 main.go:141] libmachine: (embed-certs-487027) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:54:17.890502  384787 main.go:141] libmachine: (embed-certs-487027) DBG | About to run SSH command:
	I1002 11:54:17.890514  384787 main.go:141] libmachine: (embed-certs-487027) DBG | exit 0
	I1002 11:54:17.974015  384787 main.go:141] libmachine: (embed-certs-487027) DBG | SSH cmd err, output: <nil>: 
	I1002 11:54:17.974444  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetConfigRaw
	I1002 11:54:17.975209  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:17.977468  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.977798  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.977837  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.978016  384787 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/config.json ...
	I1002 11:54:17.978201  384787 machine.go:88] provisioning docker machine ...
	I1002 11:54:17.978220  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:17.978460  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetMachineName
	I1002 11:54:17.978651  384787 buildroot.go:166] provisioning hostname "embed-certs-487027"
	I1002 11:54:17.978669  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetMachineName
	I1002 11:54:17.978817  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:17.980872  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.981298  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.981333  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.981395  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:17.981587  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:17.981746  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:17.981885  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:17.982020  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:17.982399  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:17.982413  384787 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-487027 && echo "embed-certs-487027" | sudo tee /etc/hostname
	I1002 11:54:18.103274  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-487027
	
	I1002 11:54:18.103311  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.106230  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.106654  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.106709  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.106847  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.107082  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.107266  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.107400  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.107589  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:18.108051  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:18.108081  384787 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-487027' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-487027/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-487027' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:54:18.222398  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:54:18.222431  384787 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:54:18.222453  384787 buildroot.go:174] setting up certificates
	I1002 11:54:18.222488  384787 provision.go:83] configureAuth start
	I1002 11:54:18.222500  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetMachineName
	I1002 11:54:18.222817  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:18.225631  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.226114  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.226150  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.226262  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.228719  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.229096  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.229130  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.229268  384787 provision.go:138] copyHostCerts
	I1002 11:54:18.229336  384787 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:54:18.229351  384787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:54:18.229399  384787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:54:18.229480  384787 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:54:18.229492  384787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:54:18.229511  384787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:54:18.229563  384787 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:54:18.229570  384787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:54:18.229586  384787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:54:18.229630  384787 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.embed-certs-487027 san=[192.168.72.147 192.168.72.147 localhost 127.0.0.1 minikube embed-certs-487027]
	I1002 11:54:18.296130  384787 provision.go:172] copyRemoteCerts
	I1002 11:54:18.296187  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:54:18.296212  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.298721  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.299036  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.299059  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.299181  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.299363  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.299479  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.299628  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:18.384449  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 11:54:18.406096  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:54:18.427407  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 11:54:18.448829  384787 provision.go:86] duration metric: configureAuth took 226.314252ms
	I1002 11:54:18.448858  384787 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:54:18.449065  384787 config.go:182] Loaded profile config "embed-certs-487027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:54:18.449178  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.451995  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.452365  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.452405  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.452596  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.452786  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.452958  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.453077  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.453213  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:18.453571  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:18.453606  384787 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:54:18.754879  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:54:18.754913  384787 machine.go:91] provisioned docker machine in 776.69782ms
	I1002 11:54:18.754927  384787 start.go:300] post-start starting for "embed-certs-487027" (driver="kvm2")
	I1002 11:54:18.754941  384787 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:54:18.754966  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:18.755361  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:54:18.755392  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.758184  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.758644  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.758700  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.758788  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.758981  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.759149  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.759414  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:18.847614  384787 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:54:18.851792  384787 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:54:18.851821  384787 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:54:18.851911  384787 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:54:18.852023  384787 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:54:18.852152  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:54:18.861415  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:18.883190  384787 start.go:303] post-start completed in 128.242372ms
	I1002 11:54:18.883222  384787 fix.go:56] fixHost completed within 20.127922888s
	I1002 11:54:18.883249  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.885771  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.886114  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.886141  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.886335  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.886598  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.886784  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.886922  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.887111  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:18.887556  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:18.887574  384787 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 11:54:19.007352  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247658.948838951
	
	I1002 11:54:19.007388  384787 fix.go:206] guest clock: 1696247658.948838951
	I1002 11:54:19.007404  384787 fix.go:219] Guest: 2023-10-02 11:54:18.948838951 +0000 UTC Remote: 2023-10-02 11:54:18.883226893 +0000 UTC m=+271.237550126 (delta=65.612058ms)
	I1002 11:54:19.007464  384787 fix.go:190] guest clock delta is within tolerance: 65.612058ms
	I1002 11:54:19.007471  384787 start.go:83] releasing machines lock for "embed-certs-487027", held for 20.25221392s
	I1002 11:54:19.007510  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.007831  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:19.011020  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.011386  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:19.011418  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.011602  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.012303  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.012520  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.012602  384787 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:54:19.012660  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:19.012946  384787 ssh_runner.go:195] Run: cat /version.json
	I1002 11:54:19.012976  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:19.015652  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.015935  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.016016  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:19.016063  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.016284  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:19.016411  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:19.016439  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.016482  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:19.016638  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:19.016653  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:19.016868  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:19.016871  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:19.017017  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:19.017199  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:19.124634  384787 ssh_runner.go:195] Run: systemctl --version
	I1002 11:54:19.130340  384787 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:54:19.278814  384787 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:54:19.284549  384787 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:54:19.284618  384787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:54:19.300872  384787 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:54:19.300896  384787 start.go:469] detecting cgroup driver to use...
	I1002 11:54:19.300984  384787 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:54:19.314898  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:54:19.327762  384787 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:54:19.327826  384787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:54:19.341164  384787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:54:19.354542  384787 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:54:19.469125  384787 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:54:19.581195  384787 docker.go:213] disabling docker service ...
	I1002 11:54:19.581260  384787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:54:19.595222  384787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:54:19.607587  384787 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:54:19.725376  384787 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:54:19.828507  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:54:19.845782  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:54:19.868464  384787 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:54:19.868530  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.881554  384787 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:54:19.881633  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.894090  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.905922  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.918336  384787 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:54:19.931259  384787 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:54:19.939861  384787 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:54:19.939925  384787 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:54:19.954089  384787 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:54:19.966438  384787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:54:20.124666  384787 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:54:20.329505  384787 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:54:20.329602  384787 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:54:20.336428  384787 start.go:537] Will wait 60s for crictl version
	I1002 11:54:20.336499  384787 ssh_runner.go:195] Run: which crictl
	I1002 11:54:20.343269  384787 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:54:20.386249  384787 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:54:20.386331  384787 ssh_runner.go:195] Run: crio --version
	I1002 11:54:20.429634  384787 ssh_runner.go:195] Run: crio --version
	I1002 11:54:20.476699  384787 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:54:20.478035  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:20.480720  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:20.481028  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:20.481054  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:20.481230  384787 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1002 11:54:20.485387  384787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:20.496957  384787 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:54:20.497028  384787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:20.539655  384787 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 11:54:20.539731  384787 ssh_runner.go:195] Run: which lz4
	I1002 11:54:20.543869  384787 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 11:54:20.548080  384787 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:54:20.548112  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1002 11:54:22.411067  384787 crio.go:444] Took 1.867223 seconds to copy over tarball
	I1002 11:54:22.411155  384787 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 11:54:20.416319  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting to get IP...
	I1002 11:54:20.417168  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.417561  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.417613  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:20.417539  385761 retry.go:31] will retry after 211.341658ms: waiting for machine to come up
	I1002 11:54:20.631097  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.631841  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.632011  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:20.631972  385761 retry.go:31] will retry after 257.651992ms: waiting for machine to come up
	I1002 11:54:20.891519  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.892077  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.892111  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:20.892047  385761 retry.go:31] will retry after 295.599576ms: waiting for machine to come up
	I1002 11:54:21.189739  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.190333  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.190389  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:21.190275  385761 retry.go:31] will retry after 532.182463ms: waiting for machine to come up
	I1002 11:54:21.723822  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.724414  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.724443  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:21.724314  385761 retry.go:31] will retry after 576.235756ms: waiting for machine to come up
	I1002 11:54:22.301975  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:22.302566  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:22.302600  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:22.302479  385761 retry.go:31] will retry after 913.441142ms: waiting for machine to come up
	I1002 11:54:23.217419  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:23.217905  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:23.217943  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:23.217839  385761 retry.go:31] will retry after 1.089960204s: waiting for machine to come up
	I1002 11:54:19.625761  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:19.857853  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:19.977490  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:20.080170  384505 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:54:20.080294  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:20.097093  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:20.611090  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:21.110857  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:21.610499  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:22.111420  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:22.138171  384505 api_server.go:72] duration metric: took 2.057999603s to wait for apiserver process to appear ...
	I1002 11:54:22.138201  384505 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:54:22.138224  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:25.604442  384787 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.193244457s)
	I1002 11:54:25.604543  384787 crio.go:451] Took 3.193443 seconds to extract the tarball
	I1002 11:54:25.604568  384787 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 11:54:25.660515  384787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:25.723308  384787 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 11:54:25.723339  384787 cache_images.go:84] Images are preloaded, skipping loading
	I1002 11:54:25.723436  384787 ssh_runner.go:195] Run: crio config
	I1002 11:54:25.781690  384787 cni.go:84] Creating CNI manager for ""
	I1002 11:54:25.781722  384787 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:25.781748  384787 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:54:25.781775  384787 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.147 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-487027 NodeName:embed-certs-487027 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:54:25.782020  384787 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-487027"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:54:25.782125  384787 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-487027 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:embed-certs-487027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:54:25.782183  384787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:54:25.791322  384787 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:54:25.791398  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:54:25.799709  384787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1002 11:54:25.818900  384787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:54:25.836913  384787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1002 11:54:25.856201  384787 ssh_runner.go:195] Run: grep 192.168.72.147	control-plane.minikube.internal$ /etc/hosts
	I1002 11:54:25.859962  384787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:25.872776  384787 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027 for IP: 192.168.72.147
	I1002 11:54:25.872818  384787 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:54:25.873061  384787 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:54:25.873125  384787 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:54:25.873225  384787 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/client.key
	I1002 11:54:25.873312  384787 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/apiserver.key.b24df18b
	I1002 11:54:25.873375  384787 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/proxy-client.key
	I1002 11:54:25.873530  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:54:25.873590  384787 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:54:25.873602  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:54:25.873633  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:54:25.873667  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:54:25.873702  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:54:25.873757  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:25.874732  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:54:25.901588  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:54:25.929381  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:54:25.955358  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:54:25.980414  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:54:26.008652  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:54:26.038061  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:54:26.067828  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:54:26.098717  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:54:26.131030  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:54:26.162989  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:54:26.189458  384787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:54:26.206791  384787 ssh_runner.go:195] Run: openssl version
	I1002 11:54:26.214436  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:54:26.226064  384787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:26.231428  384787 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:26.231504  384787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:26.238070  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:54:26.252779  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:54:26.267263  384787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:54:26.272245  384787 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:54:26.272316  384787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:54:26.278088  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:54:26.289430  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:54:26.300788  384787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:54:26.305731  384787 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:54:26.305812  384787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:54:26.311712  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
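The symlink names in the log (`b5213941.0`, `51391683.0`, `3ec20f2e.0`) come from OpenSSL's subject-name hash: tools that trust `/etc/ssl/certs` look certs up by `<hash>.0`. A small sketch of that convention using a throwaway CA in `/tmp` (paths are illustrative; in the VM the links land in `/etc/ssl/certs`):

```shell
#!/usr/bin/env bash
# Sketch of the c_rehash-style naming visible in the log above.
# Generate a throwaway self-signed cert, compute its subject-name
# hash with openssl, and link it as <hash>.0 the way minikube does.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
  -out /tmp/ca.pem -days 1 -subj "/CN=demoCA" 2>/dev/null

# -hash prints the 8-hex-digit subject hash used for lookup.
hash=$(openssl x509 -hash -noout -in /tmp/ca.pem)

# Link the PEM under its hash name (in the VM: /etc/ssl/certs/<hash>.0).
ln -fs /tmp/ca.pem "/tmp/${hash}.0"
ls -l "/tmp/${hash}.0"
```

The `test -L … || ln -fs …` form in the logged commands makes the operation idempotent across restarts.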
	I1002 11:54:26.322855  384787 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:54:26.328688  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:54:26.336570  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:54:26.344412  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:54:26.350583  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:54:26.356815  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:54:26.364674  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
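The run of `openssl x509 … -checkend 86400` commands above is a cert-expiry probe: exit status 0 means the certificate will still be valid 86400 seconds (24h) from now. A self-contained sketch against a throwaway cert (the `/tmp` paths are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the -checkend expiry probe used in the log above.
# Create a self-signed cert valid for 10 days, then ask openssl
# whether it survives the next 86400 seconds (24 hours).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 10 -subj "/CN=demo" 2>/dev/null

# Exit status 0 => the cert does NOT expire within the window.
openssl x509 -noout -in /tmp/demo.crt -checkend 86400 \
  && echo "cert valid for at least 24h"
```

Because only the exit status matters, this check slots cleanly into restart logic: any cert failing it gets regenerated before the control plane is brought back up.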
	I1002 11:54:26.372219  384787 kubeadm.go:404] StartCluster: {Name:embed-certs-487027 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-487027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.147 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:54:26.372341  384787 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:54:26.372397  384787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:26.424018  384787 cri.go:89] found id: ""
	I1002 11:54:26.424131  384787 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:54:26.435493  384787 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:54:26.435520  384787 kubeadm.go:636] restartCluster start
	I1002 11:54:26.435583  384787 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:54:26.447429  384787 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:26.448848  384787 kubeconfig.go:92] found "embed-certs-487027" server: "https://192.168.72.147:8443"
	I1002 11:54:26.452474  384787 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:54:26.462854  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:26.462924  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:26.475723  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:26.475751  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:26.475803  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:26.488962  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:26.989693  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:26.989776  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:27.002889  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:27.489487  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:27.489589  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:27.503912  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:24.308867  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:24.309362  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:24.309392  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:24.309326  385761 retry.go:31] will retry after 1.381170872s: waiting for machine to come up
	I1002 11:54:25.691931  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:25.692285  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:25.692386  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:25.692267  385761 retry.go:31] will retry after 1.748966707s: waiting for machine to come up
	I1002 11:54:27.442708  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:27.443145  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:27.443171  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:27.443107  385761 retry.go:31] will retry after 2.105420589s: waiting for machine to come up
	I1002 11:54:27.138701  384505 api_server.go:269] stopped: https://192.168.83.82:8443/healthz: Get "https://192.168.83.82:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 11:54:27.138757  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:28.249499  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:54:28.249540  384505 api_server.go:103] status: https://192.168.83.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:54:28.750389  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:28.756351  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1002 11:54:28.756390  384505 api_server.go:103] status: https://192.168.83.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1002 11:54:29.250308  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:29.257228  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1002 11:54:29.257264  384505 api_server.go:103] status: https://192.168.83.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1002 11:54:29.750123  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:29.758475  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 200:
	ok
	I1002 11:54:29.769049  384505 api_server.go:141] control plane version: v1.16.0
	I1002 11:54:29.769079  384505 api_server.go:131] duration metric: took 7.630868963s to wait for apiserver health ...
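The health wait above polls `/healthz` every ~500ms until the status code flips from 403/500 to 200. A sketch of that poll loop as a shell function; the URL argument and retry count are illustrative, and only the HTTP status code is inspected, as in the log:

```shell
#!/usr/bin/env bash
# Sketch of the healthz poll loop in the log above: retry until the
# endpoint returns HTTP 200, sleeping 500ms between attempts.
poll_healthz() {
  url=$1
  for _ in $(seq 1 10); do
    # -k: the apiserver cert may not be trusted; -w extracts only the code.
    code=$(curl -k -s -o /dev/null -w '%{http_code}' "$url")
    [ "$code" = "200" ] && return 0
    sleep 0.5
  done
  return 1
}
```

Treating 403 and 500 the same way (just "not yet healthy") matters here: the 403 appears before RBAC bootstrap roles exist, and the 500s show individual poststarthooks still failing, so neither is terminal.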
	I1002 11:54:29.769098  384505 cni.go:84] Creating CNI manager for ""
	I1002 11:54:29.769107  384505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:29.770969  384505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:54:27.989735  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:27.989861  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:28.007059  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:28.489495  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:28.489605  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:28.505845  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:28.989879  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:28.989963  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:29.004220  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:29.489847  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:29.489949  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:29.502986  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:29.989170  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:29.989264  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:30.006850  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:30.489389  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:30.489504  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:30.502094  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:30.989302  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:30.989399  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:31.005902  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:31.489967  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:31.490080  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:31.503748  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:31.989317  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:31.989405  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:32.003288  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:32.489803  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:32.489924  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:32.506744  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:29.550027  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:29.550550  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:29.550585  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:29.550488  385761 retry.go:31] will retry after 2.509962026s: waiting for machine to come up
	I1002 11:54:32.063392  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:32.063862  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:32.063887  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:32.063834  385761 retry.go:31] will retry after 2.845339865s: waiting for machine to come up
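The `retry.go:31` lines above show the machine-wait pattern: retry the probe with a delay that grows (and is jittered) on each failure. A simplified sketch of that idea; the function name, attempt count, and plain doubling backoff are illustrative simplifications of what `retry.go` does:

```shell
#!/usr/bin/env bash
# Sketch of the retry-with-growing-delay pattern from the log above.
# Runs the given command until it succeeds or attempts run out,
# doubling the sleep between failures (retry.go also adds jitter).
retry_with_backoff() {
  tries=$1; shift
  delay=1
  for _ in $(seq 1 "$tries"); do
    "$@" && return 0
    sleep "$delay"
    delay=$((delay * 2))
  done
  return 1
}

retry_with_backoff 3 true && echo "succeeded"
```

Growing the delay keeps the log readable and avoids hammering libvirt while the VM is still acquiring a DHCP lease, which is exactly the phase these log lines cover.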
	I1002 11:54:29.772611  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:54:29.786551  384505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:54:29.807894  384505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:54:29.818837  384505 system_pods.go:59] 7 kube-system pods found
	I1002 11:54:29.818890  384505 system_pods.go:61] "coredns-5644d7b6d9-9xdpq" [2d10c772-e2f0-4bfc-9795-0721f8bab31c] Running
	I1002 11:54:29.818901  384505 system_pods.go:61] "etcd-old-k8s-version-749860" [5826895a-f14d-43ab-9f22-edad964d4a8e] Running
	I1002 11:54:29.818910  384505 system_pods.go:61] "kube-apiserver-old-k8s-version-749860" [3418ba32-aa28-4587-a231-b1f218181e71] Running
	I1002 11:54:29.818919  384505 system_pods.go:61] "kube-controller-manager-old-k8s-version-749860" [e42ff4c0-2ec4-45b9-8189-6a225c79f5c6] Running
	I1002 11:54:29.818927  384505 system_pods.go:61] "kube-proxy-gkhxb" [b3675678-e1cf-4d86-82d9-9e068bd1ba19] Running
	I1002 11:54:29.818939  384505 system_pods.go:61] "kube-scheduler-old-k8s-version-749860" [53a1c8a7-ec6d-4d47-a980-8cfab71ad467] Running
	I1002 11:54:29.818948  384505 system_pods.go:61] "storage-provisioner" [e73d6f24-1392-40ca-b37d-03c035734d1d] Running
	I1002 11:54:29.818964  384505 system_pods.go:74] duration metric: took 11.044895ms to wait for pod list to return data ...
	I1002 11:54:29.818980  384505 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:54:29.822392  384505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:54:29.822455  384505 node_conditions.go:123] node cpu capacity is 2
	I1002 11:54:29.822472  384505 node_conditions.go:105] duration metric: took 3.48317ms to run NodePressure ...
	I1002 11:54:29.822520  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:30.106960  384505 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:54:30.111692  384505 retry.go:31] will retry after 218.727225ms: kubelet not initialised
	I1002 11:54:30.336456  384505 retry.go:31] will retry after 524.868139ms: kubelet not initialised
	I1002 11:54:30.867554  384505 retry.go:31] will retry after 427.897694ms: kubelet not initialised
	I1002 11:54:31.301616  384505 retry.go:31] will retry after 722.780158ms: kubelet not initialised
	I1002 11:54:32.029512  384505 retry.go:31] will retry after 1.205429819s: kubelet not initialised
	I1002 11:54:33.253735  384505 retry.go:31] will retry after 1.476521325s: kubelet not initialised
	I1002 11:54:32.989607  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:32.989718  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:33.004745  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:33.489141  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:33.489215  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:33.506018  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:33.990120  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:33.990217  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:34.005050  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:34.489520  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:34.489608  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:34.501965  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:34.989481  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:34.989584  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:35.002635  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:35.489123  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:35.489199  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:35.502995  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:35.989474  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:35.989565  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:36.003010  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:36.463582  384787 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:54:36.463614  384787 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:54:36.463628  384787 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:54:36.463689  384787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:36.503915  384787 cri.go:89] found id: ""
	I1002 11:54:36.503982  384787 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:54:36.519603  384787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:54:36.529026  384787 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:54:36.529086  384787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:36.538424  384787 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:36.538451  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:36.670492  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:34.910513  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:34.911092  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:34.911136  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:34.911030  385761 retry.go:31] will retry after 3.250805502s: waiting for machine to come up
	I1002 11:54:38.163585  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.164065  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Found IP for machine: 192.168.61.251
	I1002 11:54:38.164104  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has current primary IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.164124  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Reserving static IP address...
	I1002 11:54:38.164549  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-777999", mac: "52:54:00:15:a7:c9", ip: "192.168.61.251"} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.164588  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | skip adding static IP to network mk-default-k8s-diff-port-777999 - found existing host DHCP lease matching {name: "default-k8s-diff-port-777999", mac: "52:54:00:15:a7:c9", ip: "192.168.61.251"}
	I1002 11:54:38.164604  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Reserved static IP address: 192.168.61.251
	I1002 11:54:38.164623  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for SSH to be available...
	I1002 11:54:38.164639  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Getting to WaitForSSH function...
	I1002 11:54:38.166901  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.167279  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.167313  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.167579  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Using SSH client type: external
	I1002 11:54:38.167610  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa (-rw-------)
	I1002 11:54:38.167649  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:54:38.167671  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | About to run SSH command:
	I1002 11:54:38.167694  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | exit 0
	I1002 11:54:38.274617  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | SSH cmd err, output: <nil>: 
	I1002 11:54:38.275081  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetConfigRaw
	I1002 11:54:38.275836  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:38.278750  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.279150  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.279193  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.279391  384965 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/config.json ...
	I1002 11:54:38.279621  384965 machine.go:88] provisioning docker machine ...
	I1002 11:54:38.279646  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:38.279886  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetMachineName
	I1002 11:54:38.280069  384965 buildroot.go:166] provisioning hostname "default-k8s-diff-port-777999"
	I1002 11:54:38.280094  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetMachineName
	I1002 11:54:38.280253  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.282736  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.283104  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.283136  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.283230  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.283399  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.283578  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.283733  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.283892  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:38.284295  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:38.284312  384965 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-777999 && echo "default-k8s-diff-port-777999" | sudo tee /etc/hostname
	I1002 11:54:38.443082  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-777999
	
	I1002 11:54:38.443200  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.446493  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.447061  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.447106  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.447288  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.447549  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.447737  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.447899  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.448132  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:38.448554  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:38.448586  384965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-777999' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-777999/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-777999' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:54:38.594884  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:54:38.594920  384965 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:54:38.594956  384965 buildroot.go:174] setting up certificates
	I1002 11:54:38.594975  384965 provision.go:83] configureAuth start
	I1002 11:54:38.594993  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetMachineName
	I1002 11:54:38.595325  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:38.597718  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.598053  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.598088  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.598217  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.600751  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.601065  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.601099  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.601219  384965 provision.go:138] copyHostCerts
	I1002 11:54:38.601300  384965 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:54:38.601316  384965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:54:38.601393  384965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:54:38.601520  384965 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:54:38.601534  384965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:54:38.601565  384965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:54:38.601634  384965 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:54:38.601644  384965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:54:38.601670  384965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:54:38.601728  384965 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-777999 san=[192.168.61.251 192.168.61.251 localhost 127.0.0.1 minikube default-k8s-diff-port-777999]
	I1002 11:54:38.706714  384965 provision.go:172] copyRemoteCerts
	I1002 11:54:38.706783  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:54:38.706847  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.709075  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.709491  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.709547  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.709658  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.709903  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.710087  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.710216  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:38.803103  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 11:54:38.825916  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:54:38.847881  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1002 11:54:38.873772  384965 provision.go:86] duration metric: configureAuth took 278.777931ms
	I1002 11:54:38.873804  384965 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:54:38.874066  384965 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:54:38.874154  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.876864  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.877269  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.877304  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.877453  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.877666  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.877797  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.877936  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.878087  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:38.878441  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:38.878469  384965 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:54:34.736594  384505 retry.go:31] will retry after 1.866771295s: kubelet not initialised
	I1002 11:54:36.609977  384505 retry.go:31] will retry after 4.83087592s: kubelet not initialised
	I1002 11:54:39.495298  384344 start.go:369] acquired machines lock for "no-preload-304121" in 55.626389891s
	I1002 11:54:39.495355  384344 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:54:39.495364  384344 fix.go:54] fixHost starting: 
	I1002 11:54:39.495800  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:54:39.495839  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:54:39.518491  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I1002 11:54:39.518893  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:54:39.519407  384344 main.go:141] libmachine: Using API Version  1
	I1002 11:54:39.519432  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:54:39.519757  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:54:39.519941  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:54:39.520099  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 11:54:39.521857  384344 fix.go:102] recreateIfNeeded on no-preload-304121: state=Stopped err=<nil>
	I1002 11:54:39.521885  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	W1002 11:54:39.522058  384344 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:54:39.524119  384344 out.go:177] * Restarting existing kvm2 VM for "no-preload-304121" ...
	I1002 11:54:39.215761  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:54:39.215794  384965 machine.go:91] provisioned docker machine in 936.155542ms
	I1002 11:54:39.215807  384965 start.go:300] post-start starting for "default-k8s-diff-port-777999" (driver="kvm2")
	I1002 11:54:39.215822  384965 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:54:39.215848  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.216265  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:54:39.216305  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.219032  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.219387  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.219418  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.219542  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.219748  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.219910  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.220054  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:39.317075  384965 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:54:39.321405  384965 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:54:39.321429  384965 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:54:39.321505  384965 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:54:39.321599  384965 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:54:39.321716  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:54:39.330980  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:39.357830  384965 start.go:303] post-start completed in 142.005546ms
	I1002 11:54:39.357863  384965 fix.go:56] fixHost completed within 20.350127508s
	I1002 11:54:39.357900  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.360232  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.360561  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.360598  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.360768  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.360966  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.361139  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.361264  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.361425  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:39.361918  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:39.361939  384965 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:54:39.495129  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247679.435720520
	
	I1002 11:54:39.495155  384965 fix.go:206] guest clock: 1696247679.435720520
	I1002 11:54:39.495166  384965 fix.go:219] Guest: 2023-10-02 11:54:39.43572052 +0000 UTC Remote: 2023-10-02 11:54:39.357871423 +0000 UTC m=+265.343763085 (delta=77.849097ms)
	I1002 11:54:39.495194  384965 fix.go:190] guest clock delta is within tolerance: 77.849097ms
	I1002 11:54:39.495206  384965 start.go:83] releasing machines lock for "default-k8s-diff-port-777999", held for 20.487515438s
	I1002 11:54:39.495242  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.495652  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:39.498667  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.499055  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.499114  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.499370  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.499891  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.500060  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.500132  384965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:54:39.500199  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.500539  384965 ssh_runner.go:195] Run: cat /version.json
	I1002 11:54:39.500565  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.503388  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.503580  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.503885  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.503917  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.503995  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.504000  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.504081  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.504281  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.504297  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.504459  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.504459  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.504682  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.504680  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:39.504825  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:39.623582  384965 ssh_runner.go:195] Run: systemctl --version
	I1002 11:54:39.631181  384965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:54:39.787298  384965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:54:39.795202  384965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:54:39.795303  384965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:54:39.816471  384965 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:54:39.816495  384965 start.go:469] detecting cgroup driver to use...
	I1002 11:54:39.816567  384965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:54:39.836594  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:54:39.852798  384965 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:54:39.852911  384965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:54:39.868676  384965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:54:39.885480  384965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:54:40.003441  384965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:54:40.146812  384965 docker.go:213] disabling docker service ...
	I1002 11:54:40.146916  384965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:54:40.163451  384965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:54:40.178327  384965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:54:40.339579  384965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:54:40.463502  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:54:40.476402  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:54:40.499021  384965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:54:40.499117  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.511680  384965 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:54:40.511752  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.524364  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.536675  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
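The four `sed` invocations above rewrite cri-o's drop-in config in place: pin the pause image, switch the cgroup manager to cgroupfs, and replace the `conmon_cgroup` line. A minimal sketch of the same edit sequence, run against a scratch copy rather than the real /etc/crio/crio.conf.d/02-crio.conf (the seed contents here are made up for illustration):

```shell
# Scratch stand-in for /etc/crio/crio.conf.d/02-crio.conf
CONF=$(mktemp)
printf 'pause_image = "old"\ncgroup_manager = "systemd"\nconmon_cgroup = "x"\n' > "$CONF"

# Same edits as the log, minus sudo since we own the scratch file:
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
sed -i '/conmon_cgroup = .*/d' "$CONF"                          # drop the old line...
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"   # ...and re-add it after cgroup_manager
cat "$CONF"
```

The delete-then-append pair is what keeps the file idempotent across repeated starts: re-running the sequence always leaves exactly one `conmon_cgroup` line.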
	I1002 11:54:40.549326  384965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:54:40.559447  384965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:54:40.570086  384965 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:54:40.570157  384965 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:54:40.582938  384965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:54:40.594250  384965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:54:40.739528  384965 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:54:40.964248  384965 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:54:40.964336  384965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:54:40.969637  384965 start.go:537] Will wait 60s for crictl version
	I1002 11:54:40.969696  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:54:40.974270  384965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:54:41.016986  384965 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:54:41.017121  384965 ssh_runner.go:195] Run: crio --version
	I1002 11:54:41.061313  384965 ssh_runner.go:195] Run: crio --version
	I1002 11:54:41.112139  384965 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:54:39.525634  384344 main.go:141] libmachine: (no-preload-304121) Calling .Start
	I1002 11:54:39.525802  384344 main.go:141] libmachine: (no-preload-304121) Ensuring networks are active...
	I1002 11:54:39.526566  384344 main.go:141] libmachine: (no-preload-304121) Ensuring network default is active
	I1002 11:54:39.526860  384344 main.go:141] libmachine: (no-preload-304121) Ensuring network mk-no-preload-304121 is active
	I1002 11:54:39.527227  384344 main.go:141] libmachine: (no-preload-304121) Getting domain xml...
	I1002 11:54:39.527942  384344 main.go:141] libmachine: (no-preload-304121) Creating domain...
	I1002 11:54:40.973483  384344 main.go:141] libmachine: (no-preload-304121) Waiting to get IP...
	I1002 11:54:40.974731  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:40.975262  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:40.975359  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:40.975266  385933 retry.go:31] will retry after 231.149062ms: waiting for machine to come up
	I1002 11:54:41.207806  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:41.208486  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:41.208522  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:41.208461  385933 retry.go:31] will retry after 390.353931ms: waiting for machine to come up
	I1002 11:54:37.939830  384787 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.269286101s)
	I1002 11:54:37.939876  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:38.149675  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:38.246179  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:38.327794  384787 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:54:38.327884  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:38.343240  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:38.855719  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:39.355428  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:39.854862  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:40.355228  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:40.855597  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:40.891530  384787 api_server.go:72] duration metric: took 2.563733499s to wait for apiserver process to appear ...
	I1002 11:54:40.891560  384787 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:54:40.891581  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:40.892226  384787 api_server.go:269] stopped: https://192.168.72.147:8443/healthz: Get "https://192.168.72.147:8443/healthz": dial tcp 192.168.72.147:8443: connect: connection refused
	I1002 11:54:40.892274  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:40.892799  384787 api_server.go:269] stopped: https://192.168.72.147:8443/healthz: Get "https://192.168.72.147:8443/healthz": dial tcp 192.168.72.147:8443: connect: connection refused
	I1002 11:54:41.393747  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:41.113638  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:41.116930  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:41.117360  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:41.117396  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:41.117684  384965 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1002 11:54:41.122622  384965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
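The `/etc/hosts` update above follows a filter-then-append-then-copy pattern: strip any stale `host.minikube.internal` entry, append the fresh one, and swap the result in with a single `cp`. A safe-to-run sketch of that pattern against a temp file instead of the real /etc/hosts (paths and the stale 10.0.0.5 entry are invented for illustration):

```shell
# Scratch stand-in for /etc/hosts, seeded with a stale entry
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n' > "$HOSTS"

# Drop the stale line, append the current one, swap in via a temp file
# (simplified grep pattern; the log anchors on a leading tab as well)
{ grep -v 'host.minikube.internal$' "$HOSTS"; \
  printf '192.168.61.1\thost.minikube.internal\n'; } > "$HOSTS.new"
cp "$HOSTS.new" "$HOSTS"
grep 'host.minikube.internal' "$HOSTS"
```

Writing through `/tmp/h.$$` and copying, rather than redirecting straight into `/etc/hosts`, avoids truncating the file mid-read if something resolves a name at that instant.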
	I1002 11:54:41.138418  384965 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:54:41.138496  384965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:41.189380  384965 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 11:54:41.189465  384965 ssh_runner.go:195] Run: which lz4
	I1002 11:54:41.194945  384965 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 11:54:41.200215  384965 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:54:41.200254  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1002 11:54:43.164279  384965 crio.go:444] Took 1.969380 seconds to copy over tarball
	I1002 11:54:43.164370  384965 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
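The preload path above is copy-then-extract: scp the lz4-compressed image tarball to the node, unpack it under /var with `tar -I lz4`, then remove the tarball. A minimal local sketch of the same flow, with made-up paths and gzip standing in for lz4 in case the lz4 binary is not installed:

```shell
# Fake "preload" source tree and destination root (stand-ins for the
# real image cache and the node's /var)
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir -p "$SRC/lib/containers"
echo layer-data > "$SRC/lib/containers/layer1"

# Pack, extract, clean up -- mirroring:
#   sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4  (then rm /preloaded.tar.lz4)
TARBALL=$(mktemp -u).tar.gz
tar -C "$SRC" -I gzip -cf "$TARBALL" .
tar -I gzip -C "$DST" -xf "$TARBALL"
rm "$TARBALL"
ls "$DST/lib/containers"
```

`-I` hands tar an external (de)compressor, which is why the same code path serves lz4, gzip, or zstd preloads unchanged.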
	I1002 11:54:41.447247  384505 retry.go:31] will retry after 8.441231321s: kubelet not initialised
	I1002 11:54:41.600866  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:41.601691  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:41.601729  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:41.601345  385933 retry.go:31] will retry after 381.859851ms: waiting for machine to come up
	I1002 11:54:41.985107  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:41.986545  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:41.986572  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:41.986434  385933 retry.go:31] will retry after 606.51751ms: waiting for machine to come up
	I1002 11:54:42.594443  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:42.595004  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:42.595031  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:42.594935  385933 retry.go:31] will retry after 474.689172ms: waiting for machine to come up
	I1002 11:54:43.071618  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:43.072140  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:43.072196  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:43.072085  385933 retry.go:31] will retry after 931.163736ms: waiting for machine to come up
	I1002 11:54:44.005228  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:44.005899  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:44.005927  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:44.005852  385933 retry.go:31] will retry after 1.133426769s: waiting for machine to come up
	I1002 11:54:45.141320  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:45.142068  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:45.142099  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:45.141965  385933 retry.go:31] will retry after 1.458717431s: waiting for machine to come up
	I1002 11:54:45.416658  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:54:45.416697  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:54:45.416713  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:45.489874  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:54:45.489918  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:54:45.893115  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:45.901437  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:54:45.901477  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:54:46.393114  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:46.399302  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:54:46.399337  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:54:46.892875  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:46.898524  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 200:
	ok
	I1002 11:54:46.908311  384787 api_server.go:141] control plane version: v1.28.2
	I1002 11:54:46.908342  384787 api_server.go:131] duration metric: took 6.016772427s to wait for apiserver health ...
	I1002 11:54:46.908354  384787 cni.go:84] Creating CNI manager for ""
	I1002 11:54:46.908364  384787 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:47.225292  384787 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:54:47.481617  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:54:47.499011  384787 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:54:47.535238  384787 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:54:46.620757  384965 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.456345361s)
	I1002 11:54:46.620801  384965 crio.go:451] Took 3.456492 seconds to extract the tarball
	I1002 11:54:46.620814  384965 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 11:54:46.677550  384965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:46.810235  384965 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 11:54:46.810265  384965 cache_images.go:84] Images are preloaded, skipping loading
	I1002 11:54:46.810334  384965 ssh_runner.go:195] Run: crio config
	I1002 11:54:46.875355  384965 cni.go:84] Creating CNI manager for ""
	I1002 11:54:46.875378  384965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:46.875397  384965 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:54:46.875417  384965 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.251 APIServerPort:8444 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-777999 NodeName:default-k8s-diff-port-777999 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:54:46.875588  384965 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.251
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-777999"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:54:46.875674  384965 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-777999 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1002 11:54:46.875737  384965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:54:46.886943  384965 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:54:46.887034  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:54:46.898434  384965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1002 11:54:46.917830  384965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:54:46.936297  384965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1002 11:54:46.954413  384965 ssh_runner.go:195] Run: grep 192.168.61.251	control-plane.minikube.internal$ /etc/hosts
	I1002 11:54:46.958832  384965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:46.970802  384965 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999 for IP: 192.168.61.251
	I1002 11:54:46.970845  384965 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:54:46.971031  384965 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:54:46.971093  384965 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:54:46.971194  384965 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/client.key
	I1002 11:54:46.971286  384965 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/apiserver.key.04d51ca9
	I1002 11:54:46.971341  384965 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/proxy-client.key
	I1002 11:54:46.971469  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:54:46.971507  384965 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:54:46.971524  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:54:46.971572  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:54:46.971614  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:54:46.971652  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:54:46.971713  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:46.972319  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:54:46.998880  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:54:47.024639  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:54:47.048695  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:54:47.076815  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:54:47.102469  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:54:47.128913  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:54:47.155863  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:54:47.185058  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:54:47.212289  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:54:47.236848  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:54:47.261485  384965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:54:47.278535  384965 ssh_runner.go:195] Run: openssl version
	I1002 11:54:47.284888  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:54:47.296352  384965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:54:47.301262  384965 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:54:47.301331  384965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:54:47.307136  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:54:47.317650  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:54:47.328371  384965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:47.333341  384965 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:47.333421  384965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:47.339268  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:54:47.349646  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:54:47.360575  384965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:54:47.367279  384965 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:54:47.367346  384965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:54:47.374693  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:54:47.386302  384965 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:54:47.391448  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:54:47.397407  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:54:47.403122  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:54:47.408810  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:54:47.414684  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:54:47.420606  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:54:47.426568  384965 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-777999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:54:47.426702  384965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:54:47.426747  384965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:47.467190  384965 cri.go:89] found id: ""
	I1002 11:54:47.467275  384965 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:54:47.478921  384965 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:54:47.478944  384965 kubeadm.go:636] restartCluster start
	I1002 11:54:47.479016  384965 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:54:47.492971  384965 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:47.494091  384965 kubeconfig.go:92] found "default-k8s-diff-port-777999" server: "https://192.168.61.251:8444"
	I1002 11:54:47.498738  384965 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:54:47.510376  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:47.510454  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:47.523397  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:47.523417  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:47.523459  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:47.536893  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:48.037653  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:48.037746  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:48.055280  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:48.537887  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:48.537979  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:48.555759  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:49.037998  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:49.038108  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:46.602496  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:46.654672  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:46.654707  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:46.602962  385933 retry.go:31] will retry after 1.25268648s: waiting for machine to come up
	I1002 11:54:47.857506  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:47.858115  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:47.858149  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:47.858061  385933 retry.go:31] will retry after 2.104571101s: waiting for machine to come up
	I1002 11:54:49.964533  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:49.964997  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:49.965031  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:49.964942  385933 retry.go:31] will retry after 2.047553587s: waiting for machine to come up
	I1002 11:54:47.766443  384787 system_pods.go:59] 8 kube-system pods found
	I1002 11:54:47.766485  384787 system_pods.go:61] "coredns-5dd5756b68-6glsj" [ad7c852a-cdac-4ada-99da-4115b447f00c] Running
	I1002 11:54:47.766498  384787 system_pods.go:61] "etcd-embed-certs-487027" [78f5c4ed-7baf-4339-811f-c25e934de0c4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:54:47.766516  384787 system_pods.go:61] "kube-apiserver-embed-certs-487027" [275bb65c-b955-43d9-839b-6439e8c19662] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:54:47.766524  384787 system_pods.go:61] "kube-controller-manager-embed-certs-487027" [d798407e-abe2-4b70-952e-1274fff006bc] Running
	I1002 11:54:47.766532  384787 system_pods.go:61] "kube-proxy-wjjtv" [54e35e5e-7045-497f-8fef-322fe0e43afd] Running
	I1002 11:54:47.766543  384787 system_pods.go:61] "kube-scheduler-embed-certs-487027" [62c61cf2-f18e-47a9-9729-20e87fe02c89] Running
	I1002 11:54:47.766556  384787 system_pods.go:61] "metrics-server-57f55c9bc5-d8c7b" [71c33b74-c942-403a-a1d4-2b852f0070a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:54:47.766568  384787 system_pods.go:61] "storage-provisioner" [0a8120e1-c879-4726-abab-f95a4a3c8721] Running
	I1002 11:54:47.766581  384787 system_pods.go:74] duration metric: took 231.314062ms to wait for pod list to return data ...
	I1002 11:54:47.766593  384787 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:54:48.206673  384787 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:54:48.206710  384787 node_conditions.go:123] node cpu capacity is 2
	I1002 11:54:48.206722  384787 node_conditions.go:105] duration metric: took 440.12142ms to run NodePressure ...
	I1002 11:54:48.206743  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:48.736269  384787 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:54:48.754061  384787 kubeadm.go:787] kubelet initialised
	I1002 11:54:48.754094  384787 kubeadm.go:788] duration metric: took 17.795803ms waiting for restarted kubelet to initialise ...
	I1002 11:54:48.754106  384787 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:54:48.763480  384787 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:50.815900  384787 pod_ready.go:102] pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace has status "Ready":"False"
	I1002 11:54:51.815729  384787 pod_ready.go:92] pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:51.815752  384787 pod_ready.go:81] duration metric: took 3.052241738s waiting for pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:51.815761  384787 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	W1002 11:54:49.055614  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:49.537412  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:49.537517  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:49.554838  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:50.037334  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:50.037460  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:50.050213  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:50.537454  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:50.537586  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:50.551733  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:51.037281  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:51.037394  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:51.055077  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:51.537591  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:51.537672  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:51.555315  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:52.037929  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:52.038038  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:52.052852  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:52.537358  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:52.537435  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:52.553169  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:53.037814  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:53.037913  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:53.055176  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:53.537764  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:53.537869  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:53.554864  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:54.037941  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:54.038052  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:49.895219  384505 retry.go:31] will retry after 9.020637322s: kubelet not initialised
	I1002 11:54:52.015240  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:52.015623  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:52.015646  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:52.015594  385933 retry.go:31] will retry after 3.361214112s: waiting for machine to come up
	I1002 11:54:55.378293  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:55.378805  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:55.378853  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:55.378772  385933 retry.go:31] will retry after 3.33521217s: waiting for machine to come up
	I1002 11:54:53.337930  384787 pod_ready.go:92] pod "etcd-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:53.337967  384787 pod_ready.go:81] duration metric: took 1.522199476s waiting for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:53.337979  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:53.344756  384787 pod_ready.go:92] pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:53.344782  384787 pod_ready.go:81] duration metric: took 6.79552ms waiting for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:53.344791  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:55.549698  384787 pod_ready.go:102] pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace has status "Ready":"False"
	I1002 11:54:57.049146  384787 pod_ready.go:92] pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:57.049177  384787 pod_ready.go:81] duration metric: took 3.704379238s waiting for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:57.049192  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wjjtv" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:57.055125  384787 pod_ready.go:92] pod "kube-proxy-wjjtv" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:57.055144  384787 pod_ready.go:81] duration metric: took 5.945156ms waiting for pod "kube-proxy-wjjtv" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:57.055152  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	W1002 11:54:54.056234  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:54.537821  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:54.537918  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:54.552634  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:55.037141  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:55.037220  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:55.052963  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:55.537432  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:55.537531  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:55.552525  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:56.036986  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:56.037074  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:56.049750  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:56.537060  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:56.537144  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:56.548686  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:57.037931  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:57.038029  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:57.049828  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:57.511461  384965 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:54:57.511495  384965 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:54:57.511510  384965 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:54:57.511571  384965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:57.552784  384965 cri.go:89] found id: ""
	I1002 11:54:57.552866  384965 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:54:57.567867  384965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:54:57.578391  384965 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:54:57.578474  384965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:57.587065  384965 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:57.587086  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:57.717787  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.423038  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.607300  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.687023  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.778674  384965 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:54:58.778770  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:58.794920  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:58.923574  384505 retry.go:31] will retry after 19.662203801s: kubelet not initialised
	I1002 11:54:58.715622  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.716211  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has current primary IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.716229  384344 main.go:141] libmachine: (no-preload-304121) Found IP for machine: 192.168.39.143
	I1002 11:54:58.716248  384344 main.go:141] libmachine: (no-preload-304121) Reserving static IP address...
	I1002 11:54:58.716781  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "no-preload-304121", mac: "52:54:00:11:b9:ea", ip: "192.168.39.143"} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.716823  384344 main.go:141] libmachine: (no-preload-304121) Reserved static IP address: 192.168.39.143
	I1002 11:54:58.716845  384344 main.go:141] libmachine: (no-preload-304121) DBG | skip adding static IP to network mk-no-preload-304121 - found existing host DHCP lease matching {name: "no-preload-304121", mac: "52:54:00:11:b9:ea", ip: "192.168.39.143"}
	I1002 11:54:58.716864  384344 main.go:141] libmachine: (no-preload-304121) DBG | Getting to WaitForSSH function...
	I1002 11:54:58.716875  384344 main.go:141] libmachine: (no-preload-304121) Waiting for SSH to be available...
	I1002 11:54:58.719551  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.719991  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.720031  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.720236  384344 main.go:141] libmachine: (no-preload-304121) DBG | Using SSH client type: external
	I1002 11:54:58.720273  384344 main.go:141] libmachine: (no-preload-304121) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa (-rw-------)
	I1002 11:54:58.720309  384344 main.go:141] libmachine: (no-preload-304121) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:54:58.720329  384344 main.go:141] libmachine: (no-preload-304121) DBG | About to run SSH command:
	I1002 11:54:58.720355  384344 main.go:141] libmachine: (no-preload-304121) DBG | exit 0
	I1002 11:54:58.866583  384344 main.go:141] libmachine: (no-preload-304121) DBG | SSH cmd err, output: <nil>: 
	I1002 11:54:58.866916  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetConfigRaw
	I1002 11:54:58.867637  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:54:58.870844  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.871270  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.871305  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.871677  384344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/config.json ...
	I1002 11:54:58.871886  384344 machine.go:88] provisioning docker machine ...
	I1002 11:54:58.871906  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:54:58.872159  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetMachineName
	I1002 11:54:58.872343  384344 buildroot.go:166] provisioning hostname "no-preload-304121"
	I1002 11:54:58.872370  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetMachineName
	I1002 11:54:58.872566  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:58.875795  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.876215  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.876252  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.876420  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:58.876592  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:58.876766  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:58.876935  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:58.877113  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:58.877512  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:54:58.877528  384344 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-304121 && echo "no-preload-304121" | sudo tee /etc/hostname
	I1002 11:54:59.032306  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-304121
	
	I1002 11:54:59.032336  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.035842  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.036373  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.036412  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.036749  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:59.036953  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.037145  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.037313  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:59.037564  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:59.038035  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:54:59.038064  384344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-304121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-304121/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-304121' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:54:59.175880  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:54:59.175910  384344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:54:59.175933  384344 buildroot.go:174] setting up certificates
	I1002 11:54:59.175945  384344 provision.go:83] configureAuth start
	I1002 11:54:59.175957  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetMachineName
	I1002 11:54:59.176253  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:54:59.179169  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.179541  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.179577  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.179797  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.182011  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.182418  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.182451  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.182653  384344 provision.go:138] copyHostCerts
	I1002 11:54:59.182718  384344 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:54:59.182732  384344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:54:59.182807  384344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:54:59.182919  384344 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:54:59.182931  384344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:54:59.182963  384344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:54:59.183050  384344 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:54:59.183060  384344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:54:59.183088  384344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:54:59.183174  384344 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.no-preload-304121 san=[192.168.39.143 192.168.39.143 localhost 127.0.0.1 minikube no-preload-304121]
	I1002 11:54:59.492171  384344 provision.go:172] copyRemoteCerts
	I1002 11:54:59.492239  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:54:59.492266  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.495249  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.495698  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.495746  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.495900  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:59.496143  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.496299  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:59.496460  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:54:59.594538  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1002 11:54:59.625319  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 11:54:59.652745  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:54:59.676895  384344 provision.go:86] duration metric: configureAuth took 500.931279ms
	I1002 11:54:59.676930  384344 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:54:59.677160  384344 config.go:182] Loaded profile config "no-preload-304121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:54:59.677259  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.680393  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.680730  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.680764  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.681190  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:59.681491  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.681698  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.681875  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:59.682112  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:59.682651  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:54:59.682684  384344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:55:00.029184  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:55:00.029213  384344 machine.go:91] provisioned docker machine in 1.157312136s
	I1002 11:55:00.029226  384344 start.go:300] post-start starting for "no-preload-304121" (driver="kvm2")
	I1002 11:55:00.029240  384344 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:55:00.029296  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.029683  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:55:00.029722  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.032977  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.033456  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.033488  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.033677  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.033919  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.034136  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.034351  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:55:00.137946  384344 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:55:00.144169  384344 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:55:00.144209  384344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:55:00.144291  384344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:55:00.144405  384344 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:55:00.144609  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:55:00.157898  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:55:00.186547  384344 start.go:303] post-start completed in 157.300734ms
	I1002 11:55:00.186580  384344 fix.go:56] fixHost completed within 20.691216247s
	I1002 11:55:00.186609  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.189905  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.190374  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.190411  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.190718  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.190940  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.191159  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.191335  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.191494  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:55:00.191981  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:55:00.191996  384344 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:55:00.328123  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247700.270150690
	
	I1002 11:55:00.328155  384344 fix.go:206] guest clock: 1696247700.270150690
	I1002 11:55:00.328166  384344 fix.go:219] Guest: 2023-10-02 11:55:00.27015069 +0000 UTC Remote: 2023-10-02 11:55:00.186584697 +0000 UTC m=+358.877281851 (delta=83.565993ms)
	I1002 11:55:00.328193  384344 fix.go:190] guest clock delta is within tolerance: 83.565993ms
	I1002 11:55:00.328207  384344 start.go:83] releasing machines lock for "no-preload-304121", held for 20.832874678s
	I1002 11:55:00.328234  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.328584  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:55:00.331898  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.332432  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.332468  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.332651  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.333263  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.333480  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.333586  384344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:55:00.333647  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.333895  384344 ssh_runner.go:195] Run: cat /version.json
	I1002 11:55:00.333943  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.336673  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.336920  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.337021  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.337083  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.337207  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.337399  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.337487  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.337518  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.337566  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.337642  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.337734  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:55:00.337835  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.338131  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.338307  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:55:00.427708  384344 ssh_runner.go:195] Run: systemctl --version
	I1002 11:55:00.456367  384344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:55:00.604389  384344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:55:00.612859  384344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:55:00.612968  384344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:55:00.627986  384344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:55:00.628056  384344 start.go:469] detecting cgroup driver to use...
	I1002 11:55:00.628128  384344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:55:00.643670  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:55:00.656987  384344 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:55:00.657058  384344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:55:00.669708  384344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:55:00.682586  384344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:55:00.790044  384344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:55:00.913634  384344 docker.go:213] disabling docker service ...
	I1002 11:55:00.913717  384344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:55:00.926496  384344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:55:00.938769  384344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:55:01.045413  384344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:55:01.169133  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:55:01.182168  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:55:01.201850  384344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:55:01.201926  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.214874  384344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:55:01.214972  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.225123  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.237560  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.247898  384344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:55:01.260797  384344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:55:01.271528  384344 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:55:01.271602  384344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:55:01.285906  384344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:55:01.297623  384344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:55:01.429828  384344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:55:01.617340  384344 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:55:01.617486  384344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:55:01.622871  384344 start.go:537] Will wait 60s for crictl version
	I1002 11:55:01.622942  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:01.627257  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:55:01.674032  384344 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:55:01.674130  384344 ssh_runner.go:195] Run: crio --version
	I1002 11:55:01.726822  384344 ssh_runner.go:195] Run: crio --version
	I1002 11:55:01.777433  384344 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:54:59.549254  384787 pod_ready.go:102] pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:01.550493  384787 pod_ready.go:92] pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:01.550524  384787 pod_ready.go:81] duration metric: took 4.495364436s waiting for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:01.550537  384787 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:59.310529  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:59.811582  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:00.310859  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:00.810518  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:01.311217  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:01.336761  384965 api_server.go:72] duration metric: took 2.55808678s to wait for apiserver process to appear ...
	I1002 11:55:01.336793  384965 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:55:01.336814  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:01.778891  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:55:01.781741  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:01.782048  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:01.782088  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:01.782334  384344 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 11:55:01.787047  384344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:55:01.803390  384344 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:55:01.803482  384344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:55:01.853839  384344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 11:55:01.853868  384344 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.2 registry.k8s.io/kube-controller-manager:v1.28.2 registry.k8s.io/kube-scheduler:v1.28.2 registry.k8s.io/kube-proxy:v1.28.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 11:55:01.853954  384344 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:01.853966  384344 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:01.854164  384344 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:01.854189  384344 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:01.854254  384344 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:01.854169  384344 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:01.854325  384344 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1002 11:55:01.854171  384344 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:01.855315  384344 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:01.855339  384344 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:01.855355  384344 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:01.855809  384344 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:01.855841  384344 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:01.855856  384344 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1002 11:55:01.855809  384344 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:01.855815  384344 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.001275  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.001299  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:02.001275  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:02.002150  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1002 11:55:02.004275  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:02.007591  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:02.028882  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:02.199630  384344 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1002 11:55:02.199751  384344 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.199678  384344 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1002 11:55:02.199838  384344 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:02.199866  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.199890  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.199707  384344 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.2" does not exist at hash "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57" in container runtime
	I1002 11:55:02.199951  384344 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:02.199981  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305560  384344 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.2" does not exist at hash "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce" in container runtime
	I1002 11:55:02.305618  384344 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:02.305670  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305721  384344 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.2" does not exist at hash "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8" in container runtime
	I1002 11:55:02.305784  384344 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:02.305826  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305853  384344 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.2" needs transfer: "registry.k8s.io/kube-proxy:v1.28.2" does not exist at hash "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0" in container runtime
	I1002 11:55:02.305893  384344 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:02.305934  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305943  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:02.305999  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:02.306035  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.403560  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:02.403701  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1002 11:55:02.403791  384344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I1002 11:55:02.403861  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:02.403983  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1002 11:55:02.404056  384344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I1002 11:55:02.404148  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.2
	I1002 11:55:02.404200  384344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.2
	I1002 11:55:02.404274  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:02.512787  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.2
	I1002 11:55:02.512909  384344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1002 11:55:02.513038  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1002 11:55:02.513062  384344 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1002 11:55:02.513091  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1002 11:55:02.513169  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.2 (exists)
	I1002 11:55:02.513217  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.2
	I1002 11:55:02.513258  384344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.2
	I1002 11:55:02.513292  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1002 11:55:02.513343  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.2
	I1002 11:55:02.513399  384344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.2
	I1002 11:55:02.519549  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.2 (exists)
	I1002 11:55:02.529685  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.2 (exists)
	I1002 11:55:02.739233  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:03.573767  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:05.577137  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:07.577690  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:06.191660  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:06.191697  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:06.191711  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:06.268234  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:06.268270  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:06.769081  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:06.775235  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:06.775267  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:07.268848  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:07.289255  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:07.289294  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:07.769010  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:07.776315  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 200:
	ok
	I1002 11:55:07.785543  384965 api_server.go:141] control plane version: v1.28.2
	I1002 11:55:07.785578  384965 api_server.go:131] duration metric: took 6.448776132s to wait for apiserver health ...
	I1002 11:55:07.785620  384965 cni.go:84] Creating CNI manager for ""
	I1002 11:55:07.785630  384965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:55:07.963339  384965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:55:07.965036  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:55:08.003261  384965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:55:08.072023  384965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:55:08.084616  384965 system_pods.go:59] 8 kube-system pods found
	I1002 11:55:08.084657  384965 system_pods.go:61] "coredns-5dd5756b68-9wv56" [f04d6125-ea28-41cc-9251-7ccee27162bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:55:08.084670  384965 system_pods.go:61] "etcd-default-k8s-diff-port-777999" [5bc34f24-2922-4ce8-b11d-935f9b3c8b4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:55:08.084680  384965 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-777999" [b44f8ca6-f43e-4c99-af8d-23255f94257c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:55:08.084693  384965 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-777999" [5e20830f-d51b-4ca7-a7b6-e0f24f4a50e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 11:55:08.084709  384965 system_pods.go:61] "kube-proxy-gchnc" [061811c7-2ac8-448a-b441-838f9aaf9145] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 11:55:08.084723  384965 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-777999" [9d874541-7d2e-4c9b-8935-2bf7386e6a07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 11:55:08.084737  384965 system_pods.go:61] "metrics-server-57f55c9bc5-wk2c7" [f28e9db7-2182-40d8-85a7-fa40c2ff8077] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:55:08.084752  384965 system_pods.go:61] "storage-provisioner" [aff1275b-909d-4c70-9fb5-cb36170c591e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:55:08.084767  384965 system_pods.go:74] duration metric: took 12.715919ms to wait for pod list to return data ...
	I1002 11:55:08.084783  384965 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:55:08.089289  384965 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:55:08.089323  384965 node_conditions.go:123] node cpu capacity is 2
	I1002 11:55:08.089337  384965 node_conditions.go:105] duration metric: took 4.548285ms to run NodePressure ...
	I1002 11:55:08.089359  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:08.496528  384965 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:55:08.509299  384965 kubeadm.go:787] kubelet initialised
	I1002 11:55:08.509331  384965 kubeadm.go:788] duration metric: took 12.771905ms waiting for restarted kubelet to initialise ...
	I1002 11:55:08.509343  384965 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:08.516124  384965 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.528838  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.528938  384965 pod_ready.go:81] duration metric: took 12.780895ms waiting for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.528967  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.529001  384965 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.534830  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.534867  384965 pod_ready.go:81] duration metric: took 5.838075ms waiting for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.534882  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.534892  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.549854  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.549885  384965 pod_ready.go:81] duration metric: took 14.983531ms waiting for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.549900  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.549913  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.559230  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.559313  384965 pod_ready.go:81] duration metric: took 9.38728ms waiting for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.559335  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.559347  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.900163  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-proxy-gchnc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.900190  384965 pod_ready.go:81] duration metric: took 340.83496ms waiting for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.900199  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-proxy-gchnc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.900208  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:09.516054  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.516096  384965 pod_ready.go:81] duration metric: took 615.877294ms waiting for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:09.516112  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.516121  384965 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:09.701735  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.701764  384965 pod_ready.go:81] duration metric: took 185.632721ms waiting for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:09.701775  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.701782  384965 pod_ready.go:38] duration metric: took 1.192428133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:09.701800  384965 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:55:09.715441  384965 ops.go:34] apiserver oom_adj: -16
	I1002 11:55:09.715471  384965 kubeadm.go:640] restartCluster took 22.236518554s
	I1002 11:55:09.715483  384965 kubeadm.go:406] StartCluster complete in 22.288924118s
	I1002 11:55:09.715506  384965 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:55:09.715603  384965 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:55:09.717604  384965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:55:09.832925  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:55:09.832958  384965 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:55:09.833045  384965 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:55:09.833070  384965 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-777999"
	I1002 11:55:09.833078  384965 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-777999"
	I1002 11:55:09.833081  384965 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-777999"
	I1002 11:55:09.833097  384965 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-777999"
	W1002 11:55:09.833106  384965 addons.go:240] addon storage-provisioner should already be in state true
	I1002 11:55:09.833106  384965 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-777999"
	I1002 11:55:09.833108  384965 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-777999"
	W1002 11:55:09.833125  384965 addons.go:240] addon metrics-server should already be in state true
	I1002 11:55:09.833170  384965 host.go:66] Checking if "default-k8s-diff-port-777999" exists ...
	I1002 11:55:09.833170  384965 host.go:66] Checking if "default-k8s-diff-port-777999" exists ...
	I1002 11:55:09.833570  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.833592  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.833615  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.833624  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.833634  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.833646  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.839134  384965 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-777999" context rescaled to 1 replicas
	I1002 11:55:09.839204  384965 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:55:09.882782  384965 out.go:177] * Verifying Kubernetes components...
	I1002 11:55:09.852478  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I1002 11:55:09.853164  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46377
	I1002 11:55:09.853212  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33563
	I1002 11:55:09.884413  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:55:09.884847  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.884862  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.884978  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.885450  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.885473  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.885590  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.885616  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.885875  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.885905  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.885931  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.885991  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.886291  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.886499  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.886608  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.886609  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.886643  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.886650  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.890816  384965 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-777999"
	W1002 11:55:09.890840  384965 addons.go:240] addon default-storageclass should already be in state true
	I1002 11:55:09.890874  384965 host.go:66] Checking if "default-k8s-diff-port-777999" exists ...
	I1002 11:55:09.891346  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.891381  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.905399  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I1002 11:55:09.905472  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1002 11:55:09.905949  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.906013  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.906516  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.906548  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.906616  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.906638  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.907044  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.907050  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.907204  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.907296  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.907802  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I1002 11:55:09.908797  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.909184  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:55:09.911200  384965 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 11:55:09.909554  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.909557  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:55:09.913028  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.913040  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 11:55:09.913097  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 11:55:09.913128  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:55:09.914961  384965 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:10.102329  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.589219551s)
	I1002 11:55:10.102369  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1002 11:55:10.102405  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.2
	I1002 11:55:10.102437  384344 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.2: (7.58915139s)
	I1002 11:55:10.102467  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.2 (exists)
	I1002 11:55:10.102468  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.2
	I1002 11:55:10.102517  384344 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (7.363200276s)
	I1002 11:55:10.102554  384344 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1002 11:55:10.102587  384344 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:10.102639  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:10.107376  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:09.913417  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.916644  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.916734  384965 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:55:09.916751  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:55:09.916773  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:55:09.917177  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.917217  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.917938  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:55:09.917968  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.918238  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:55:09.918494  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:55:09.918725  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:55:09.919087  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:55:09.920001  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.920470  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:55:09.920499  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.920702  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:55:09.920898  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:55:09.921037  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:55:09.921164  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:55:09.936676  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41353
	I1002 11:55:09.937243  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.937814  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.937838  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.938269  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.938503  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.940662  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:55:09.940930  384965 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:55:09.940952  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:55:09.940975  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:55:09.944168  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.944929  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:55:09.944938  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:55:09.944972  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.945129  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:55:09.945323  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:55:09.945464  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:55:10.048027  384965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:55:10.064428  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 11:55:10.064457  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 11:55:10.113892  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 11:55:10.113922  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 11:55:10.162803  384965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:55:10.203352  384965 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-777999" to be "Ready" ...
	I1002 11:55:10.203377  384965 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1002 11:55:10.209916  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:55:10.209945  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 11:55:10.283168  384965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:55:11.838556  384965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.790470973s)
	I1002 11:55:11.838584  384965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.675739061s)
	I1002 11:55:11.838618  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.838620  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.838659  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.838635  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.838886  384965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.555664753s)
	I1002 11:55:11.838941  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.838954  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.838980  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.838992  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.839001  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.838961  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.839104  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.839139  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.839157  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.839170  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.839303  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.839369  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.839409  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.839421  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.839431  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.839688  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.839700  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.839710  384965 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-777999"
	I1002 11:55:11.841889  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.841915  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.842201  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.842253  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.842259  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.842269  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.849511  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.849529  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.849874  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.849878  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.849901  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.853656  384965 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1002 11:55:10.075236  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:12.576161  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:11.855303  384965 addons.go:502] enable addons completed in 2.022363817s: enabled=[metrics-server storage-provisioner default-storageclass]
	I1002 11:55:12.217572  384965 node_ready.go:58] node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:12.931492  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.2: (2.828987001s)
	I1002 11:55:12.931534  384344 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.824127868s)
	I1002 11:55:12.931594  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1002 11:55:12.931539  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.2 from cache
	I1002 11:55:12.931660  384344 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1002 11:55:12.931718  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1002 11:55:12.931728  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1002 11:55:12.939018  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1002 11:55:14.293770  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.362024408s)
	I1002 11:55:14.293812  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1002 11:55:14.293844  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1002 11:55:14.293919  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1002 11:55:15.843943  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.2: (1.549996136s)
	I1002 11:55:15.843970  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.2 from cache
	I1002 11:55:15.843995  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.2
	I1002 11:55:15.844044  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.2
	I1002 11:55:15.077109  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:17.575669  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:14.219000  384965 node_ready.go:58] node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:16.717611  384965 node_ready.go:49] node "default-k8s-diff-port-777999" has status "Ready":"True"
	I1002 11:55:16.717639  384965 node_ready.go:38] duration metric: took 6.514250616s waiting for node "default-k8s-diff-port-777999" to be "Ready" ...
	I1002 11:55:16.717652  384965 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:16.724331  384965 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.242058  384965 pod_ready.go:92] pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:17.242084  384965 pod_ready.go:81] duration metric: took 517.728305ms waiting for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.242093  384965 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.247916  384965 pod_ready.go:92] pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:17.247946  384965 pod_ready.go:81] duration metric: took 5.844733ms waiting for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.247960  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.596133  384505 kubeadm.go:787] kubelet initialised
	I1002 11:55:18.596163  384505 kubeadm.go:788] duration metric: took 48.489169583s waiting for restarted kubelet to initialise ...
	I1002 11:55:18.596173  384505 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:18.603606  384505 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9xdpq" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.612080  384505 pod_ready.go:92] pod "coredns-5644d7b6d9-9xdpq" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.612112  384505 pod_ready.go:81] duration metric: took 8.472159ms waiting for pod "coredns-5644d7b6d9-9xdpq" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.612124  384505 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-xrfq8" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.618116  384505 pod_ready.go:92] pod "coredns-5644d7b6d9-xrfq8" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.618147  384505 pod_ready.go:81] duration metric: took 6.014635ms waiting for pod "coredns-5644d7b6d9-xrfq8" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.618159  384505 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.624120  384505 pod_ready.go:92] pod "etcd-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.624148  384505 pod_ready.go:81] duration metric: took 5.979959ms waiting for pod "etcd-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.624162  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.631373  384505 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.631404  384505 pod_ready.go:81] duration metric: took 7.233318ms waiting for pod "kube-apiserver-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.631418  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.990560  384505 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.990593  384505 pod_ready.go:81] duration metric: took 359.165649ms waiting for pod "kube-controller-manager-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.990608  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gkhxb" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.708531  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.2: (1.864455947s)
	I1002 11:55:17.708567  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.2 from cache
	I1002 11:55:17.708616  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.2
	I1002 11:55:17.708669  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.2
	I1002 11:55:20.492385  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.2: (2.783683562s)
	I1002 11:55:20.492427  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.2 from cache
	I1002 11:55:20.492455  384344 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1002 11:55:20.492508  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1002 11:55:19.575875  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:22.075666  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:19.526494  384965 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:19.526525  384965 pod_ready.go:81] duration metric: took 2.278556042s waiting for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.526542  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:20.927586  384965 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:20.927626  384965 pod_ready.go:81] duration metric: took 1.401074339s waiting for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:20.927641  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.117907  384965 pod_ready.go:92] pod "kube-proxy-gchnc" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:21.117943  384965 pod_ready.go:81] duration metric: took 190.292051ms waiting for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.117957  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.517768  384965 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:21.517788  384965 pod_ready.go:81] duration metric: took 399.822591ms waiting for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.517800  384965 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:23.829704  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:19.390560  384505 pod_ready.go:92] pod "kube-proxy-gkhxb" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:19.390588  384505 pod_ready.go:81] duration metric: took 399.970888ms waiting for pod "kube-proxy-gkhxb" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.390602  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.791405  384505 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:19.791443  384505 pod_ready.go:81] duration metric: took 400.826662ms waiting for pod "kube-scheduler-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.791458  384505 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:22.098383  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:24.098434  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:21.439323  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1002 11:55:21.439378  384344 cache_images.go:123] Successfully loaded all cached images
	I1002 11:55:21.439386  384344 cache_images.go:92] LoadImages completed in 19.585504619s
	I1002 11:55:21.439504  384344 ssh_runner.go:195] Run: crio config
	I1002 11:55:21.510657  384344 cni.go:84] Creating CNI manager for ""
	I1002 11:55:21.510683  384344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:55:21.510703  384344 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:55:21.510734  384344 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.143 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-304121 NodeName:no-preload-304121 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:55:21.511445  384344 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-304121"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.143
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.143"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:55:21.511576  384344 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-304121 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:no-preload-304121 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:55:21.511643  384344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:55:21.522719  384344 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:55:21.522788  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:55:21.531557  384344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1002 11:55:21.548551  384344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:55:21.565791  384344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1002 11:55:21.583240  384344 ssh_runner.go:195] Run: grep 192.168.39.143	control-plane.minikube.internal$ /etc/hosts
	I1002 11:55:21.587268  384344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:55:21.600487  384344 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121 for IP: 192.168.39.143
	I1002 11:55:21.600520  384344 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:55:21.600663  384344 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:55:21.600697  384344 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:55:21.600794  384344 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/client.key
	I1002 11:55:21.600873  384344 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/apiserver.key.62e94479
	I1002 11:55:21.600926  384344 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/proxy-client.key
	I1002 11:55:21.601033  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:55:21.601061  384344 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:55:21.601071  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:55:21.601093  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:55:21.601118  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:55:21.601146  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:55:21.601182  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:55:21.601818  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:55:21.626860  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:55:21.650402  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:55:21.678876  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 11:55:21.704351  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:55:21.729385  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:55:21.755185  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:55:21.779149  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:55:21.802775  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:55:21.825691  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:55:21.849575  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:55:21.872777  384344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:55:21.890629  384344 ssh_runner.go:195] Run: openssl version
	I1002 11:55:21.896382  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:55:21.906415  384344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:55:21.911134  384344 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:55:21.911202  384344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:55:21.916782  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:55:21.926770  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:55:21.936394  384344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:55:21.940874  384344 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:55:21.940944  384344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:55:21.946542  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:55:21.956590  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:55:21.966128  384344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:55:21.971092  384344 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:55:21.971144  384344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:55:21.976625  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:55:21.987142  384344 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:55:21.991548  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:55:21.998311  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:55:22.004302  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:55:22.010267  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:55:22.016280  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:55:22.022273  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:55:22.027921  384344 kubeadm.go:404] StartCluster: {Name:no-preload-304121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-304121 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:55:22.028050  384344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:55:22.028141  384344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:55:22.068066  384344 cri.go:89] found id: ""
	I1002 11:55:22.068147  384344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:55:22.079381  384344 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:55:22.079406  384344 kubeadm.go:636] restartCluster start
	I1002 11:55:22.079471  384344 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:55:22.088977  384344 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:22.090087  384344 kubeconfig.go:92] found "no-preload-304121" server: "https://192.168.39.143:8443"
	I1002 11:55:22.093401  384344 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:55:22.103315  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:22.103378  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:22.114520  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:22.114538  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:22.114586  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:22.126040  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:22.626326  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:22.626438  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:22.637215  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:23.126863  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:23.126967  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:23.138035  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:23.626453  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:23.626539  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:23.639113  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:24.126445  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:24.126541  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:24.139561  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:24.626423  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:24.626534  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:24.638442  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:25.127011  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:25.127103  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:25.139945  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:25.626451  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:25.626539  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:25.638919  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:26.126459  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:26.126551  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:26.140068  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:24.574146  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.574656  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.329321  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:28.329400  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.098690  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:28.098837  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.626344  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:26.626445  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:26.641274  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:27.126886  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:27.126965  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:27.139451  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:27.627110  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:27.627264  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:27.640675  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:28.126212  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:28.126301  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:28.140048  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:28.626433  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:28.626530  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:28.639683  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:29.127030  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:29.127142  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:29.139681  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:29.626803  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:29.626878  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:29.639468  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:30.127126  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:30.127231  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:30.140930  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:30.626441  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:30.626535  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:30.639070  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:31.126421  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:31.126503  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:31.138724  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:28.575201  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:31.074607  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:30.830079  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:32.832350  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:30.099074  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:32.596870  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:31.627189  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:31.627281  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:31.640362  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:32.104121  384344 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:55:32.104153  384344 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:55:32.104169  384344 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:55:32.104223  384344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:55:32.147672  384344 cri.go:89] found id: ""
	I1002 11:55:32.147756  384344 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:55:32.164049  384344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:55:32.174941  384344 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:55:32.175041  384344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:55:32.185756  384344 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:55:32.185783  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:32.328093  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.120678  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.341378  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.433591  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.518381  384344 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:55:33.518458  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:33.530334  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:34.043021  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:34.542602  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:35.042825  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:35.542484  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:36.042547  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:36.067551  384344 api_server.go:72] duration metric: took 2.549193903s to wait for apiserver process to appear ...
	I1002 11:55:36.067574  384344 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:55:36.067593  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:33.076598  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:35.077561  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:37.575927  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:35.328950  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:37.330925  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:34.598649  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:36.598851  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:39.099902  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:40.195285  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:40.195318  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:40.195330  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:40.261287  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:40.261324  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:40.762016  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:40.776249  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:40.776279  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:41.262027  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:41.277940  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:41.277971  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:41.762404  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:41.767751  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 200:
	ok
	I1002 11:55:41.775963  384344 api_server.go:141] control plane version: v1.28.2
	I1002 11:55:41.775988  384344 api_server.go:131] duration metric: took 5.708406738s to wait for apiserver health ...
	I1002 11:55:41.775997  384344 cni.go:84] Creating CNI manager for ""
	I1002 11:55:41.776003  384344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:55:41.777791  384344 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:55:40.076215  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:42.574607  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:39.831982  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:42.330541  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:41.599812  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:44.097139  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:41.779495  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:55:41.796340  384344 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:55:41.838383  384344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:55:41.863561  384344 system_pods.go:59] 8 kube-system pods found
	I1002 11:55:41.863600  384344 system_pods.go:61] "coredns-5dd5756b68-hn8bw" [f388b655-7f90-436d-a1fd-458f22c7f5e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:55:41.863612  384344 system_pods.go:61] "etcd-no-preload-304121" [b45507da-d57a-45f5-82a3-37b273c42747] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:55:41.863621  384344 system_pods.go:61] "kube-apiserver-no-preload-304121" [7f8cdde0-5050-4cea-87c5-56bd0a5d623b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:55:41.863630  384344 system_pods.go:61] "kube-controller-manager-no-preload-304121" [24d40a92-d549-48c8-bf5f-983fdc15dcae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 11:55:41.863641  384344 system_pods.go:61] "kube-proxy-cwvr7" [9e3f08e6-92ad-4ebc-afe3-44d5ab81a63a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 11:55:41.863651  384344 system_pods.go:61] "kube-scheduler-no-preload-304121" [cc3c6828-f829-416a-9cfd-ddcc0f485578] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 11:55:41.863665  384344 system_pods.go:61] "metrics-server-57f55c9bc5-lrqt9" [7b70c72d-06b3-40ae-8e0c-ea4794cfe47b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:55:41.863682  384344 system_pods.go:61] "storage-provisioner" [457608a4-5ba9-45d2-841e-889930ce6bd7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:55:41.863694  384344 system_pods.go:74] duration metric: took 25.279676ms to wait for pod list to return data ...
	I1002 11:55:41.863707  384344 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:55:41.870534  384344 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:55:41.870580  384344 node_conditions.go:123] node cpu capacity is 2
	I1002 11:55:41.870636  384344 node_conditions.go:105] duration metric: took 6.921999ms to run NodePressure ...
	I1002 11:55:41.870666  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:42.164858  384344 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:55:42.169831  384344 kubeadm.go:787] kubelet initialised
	I1002 11:55:42.169855  384344 kubeadm.go:788] duration metric: took 4.969744ms waiting for restarted kubelet to initialise ...
	I1002 11:55:42.169864  384344 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:42.176338  384344 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:44.195428  384344 pod_ready.go:102] pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:46.195763  384344 pod_ready.go:92] pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:46.195786  384344 pod_ready.go:81] duration metric: took 4.019424872s waiting for pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:46.195795  384344 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:44.581249  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:47.074875  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:44.331120  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:46.833248  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:46.099661  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:48.599051  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:48.217529  384344 pod_ready.go:102] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:50.218641  384344 pod_ready.go:102] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:49.575639  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:52.074550  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:49.329627  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:51.330613  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:53.330666  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:51.098233  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:53.098464  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:52.717990  384344 pod_ready.go:102] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:53.716716  384344 pod_ready.go:92] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:53.716751  384344 pod_ready.go:81] duration metric: took 7.520948071s waiting for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:53.716769  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.738808  384344 pod_ready.go:92] pod "kube-apiserver-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.738832  384344 pod_ready.go:81] duration metric: took 1.022054915s waiting for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.738841  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.743979  384344 pod_ready.go:92] pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.743997  384344 pod_ready.go:81] duration metric: took 5.14952ms waiting for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.744006  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cwvr7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.749813  384344 pod_ready.go:92] pod "kube-proxy-cwvr7" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.749843  384344 pod_ready.go:81] duration metric: took 5.828956ms waiting for pod "kube-proxy-cwvr7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.749855  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.913811  384344 pod_ready.go:92] pod "kube-scheduler-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.913840  384344 pod_ready.go:81] duration metric: took 163.97545ms waiting for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.913853  384344 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.075263  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:56.574518  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:55.829643  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:58.328816  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:55.597512  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:57.598176  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:57.221008  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:59.221092  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:01.221270  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:59.075344  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:01.576898  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:00.330184  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:02.332041  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:59.599606  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:02.098251  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:04.098441  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:03.222251  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:05.721050  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:03.577043  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:06.075021  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:04.829434  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:06.830586  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:08.830689  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:06.100229  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:08.597399  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:07.725911  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:10.222275  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:08.574907  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:11.075011  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:10.831040  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:13.330226  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:10.599336  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:12.601338  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:12.721538  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:14.732864  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:13.075225  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:15.575267  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:15.831410  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:18.328821  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:15.098085  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:17.598406  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:17.220843  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:19.221812  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:18.074885  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:20.575220  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:20.830090  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:23.329239  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:20.108397  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:22.597329  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:21.723316  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:24.220817  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:26.222858  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:23.075276  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:25.574332  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:27.574872  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:25.330095  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:27.831991  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:24.598737  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:27.098098  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:28.721424  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:30.721466  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:30.074535  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:32.075748  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:30.330155  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:32.830009  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:29.597397  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:31.598389  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:33.598490  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:33.223521  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:35.719548  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:34.575020  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.074654  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:35.331567  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.832286  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:35.598829  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.599403  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.722451  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:40.223547  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:39.075433  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:41.575885  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:40.329838  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:42.330038  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:40.099862  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:42.598269  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:42.723887  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:45.221944  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:44.075128  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:46.075540  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:44.331960  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:46.829987  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:45.097469  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:47.098616  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:47.222108  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:49.721938  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:48.589935  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:51.074993  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:49.331749  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:51.830280  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:53.830731  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:49.598433  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:52.097486  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:54.098228  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:52.222646  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:54.726547  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:53.076322  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:55.575236  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:56.329005  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:58.330077  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:56.598418  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:59.098019  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:57.221753  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:59.721824  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:58.074481  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:00.576860  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:00.831342  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:03.328695  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:01.598124  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:04.098241  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:02.221634  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:04.222422  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:03.075152  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:05.076964  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:07.577621  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:05.328811  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:07.329223  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:06.598041  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:09.097384  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:06.724181  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:09.221108  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:11.223407  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:10.077910  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:12.574292  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:09.331559  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:11.828655  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:13.829065  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:11.098632  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:13.099363  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:13.721785  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:16.222201  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:14.574467  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:16.576124  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:15.829618  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:17.830298  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:15.598739  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:18.097854  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:18.722947  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:21.220868  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:19.074608  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:21.079563  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:20.329680  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:22.335299  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:20.109847  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:22.598994  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:23.221458  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:25.222249  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:23.575662  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:26.075111  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:24.829500  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:26.830678  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:25.099426  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:27.598577  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:27.721159  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:29.725949  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:28.574416  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:30.576031  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:29.330079  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:31.330829  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:33.829243  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:30.098615  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:32.598161  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:32.220933  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:34.720190  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:33.075330  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:35.075824  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:37.574487  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:35.829585  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:38.333997  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:34.598838  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:37.098682  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:36.723779  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:39.222751  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:40.074293  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:42.574665  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:40.829324  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:43.329265  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:39.598047  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:41.598338  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:44.097421  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:41.720538  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:43.721398  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:46.220972  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:45.074832  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:47.573962  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:45.330175  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:47.829115  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:46.097496  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:48.098108  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:48.221977  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:50.222810  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:49.576755  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.076442  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:49.829764  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.330051  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:50.099771  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.599534  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.223223  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:54.721544  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:54.574341  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:56.574466  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:54.829215  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:56.829468  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:58.829730  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:55.097141  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:57.598230  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:57.221854  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:59.721190  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:58.574928  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:00.575201  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:00.830156  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:03.329206  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:59.599838  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:02.097630  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:04.099434  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:01.724512  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:04.223282  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:03.076896  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:05.576101  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:05.330313  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:07.830038  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:06.597389  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:09.098677  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:06.721370  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:08.723225  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:11.224608  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:08.076078  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:10.574982  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:12.575115  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:09.832412  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:12.330220  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:11.597760  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:13.598933  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:13.726487  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.220404  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:14.575310  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.576156  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:14.330536  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.829762  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:18.833076  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.099600  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:18.599713  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:18.222118  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:20.722548  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:19.076690  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:21.575073  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:21.330604  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.829742  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:21.099777  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.598614  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.220183  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:25.221895  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.575355  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:25.575510  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:25.830538  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:28.329783  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:26.097290  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:28.097568  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:27.722661  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.221305  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:28.074457  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.074944  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:32.075905  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.831228  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:33.328903  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.098502  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:32.599120  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:32.221445  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:34.224133  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:34.075953  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:36.574997  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:35.330632  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:37.830117  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:35.101830  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:37.597886  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:36.722453  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:38.722619  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:40.725507  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:39.077321  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:41.574812  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:40.329004  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:42.329704  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:39.598243  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:41.600336  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:44.098496  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:43.225247  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:45.721116  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:43.574928  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:46.073774  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:44.830119  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:47.330229  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:46.101053  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:48.597255  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:47.724301  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:50.220275  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:48.074634  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:50.075498  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:52.576147  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:49.829149  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:52.328994  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:50.598113  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:53.096876  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:52.224282  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:54.721074  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:54.576355  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:57.074445  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:54.330474  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:56.331220  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:58.829693  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:55.098655  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:57.598659  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:56.721698  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:58.721958  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:01.222685  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:59.074760  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:01.076178  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:01.551409  384787 pod_ready.go:81] duration metric: took 4m0.000833874s waiting for pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:01.551453  384787 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:01.551481  384787 pod_ready.go:38] duration metric: took 4m12.797362192s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:01.551549  384787 kubeadm.go:640] restartCluster took 4m35.116019688s
	W1002 11:59:01.551687  384787 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1002 11:59:01.551757  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 11:59:00.830381  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:02.830963  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:00.103080  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:02.600662  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:03.720777  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:05.722315  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:05.330034  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:07.835944  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:05.098121  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:07.098246  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:09.099171  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:07.725245  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:10.221073  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:10.328885  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:12.331198  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:11.599122  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:14.099609  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:15.268063  384787 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.716271748s)
	I1002 11:59:15.268160  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:15.282632  384787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:59:15.294231  384787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:59:15.305847  384787 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:59:15.305892  384787 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 11:59:15.365627  384787 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 11:59:15.365703  384787 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 11:59:15.546049  384787 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 11:59:15.546175  384787 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 11:59:15.546300  384787 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 11:59:15.810889  384787 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 11:59:12.221147  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:14.222293  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:16.223901  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:15.813908  384787 out.go:204]   - Generating certificates and keys ...
	I1002 11:59:15.814079  384787 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 11:59:15.814178  384787 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 11:59:15.814257  384787 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 11:59:15.814309  384787 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1002 11:59:15.814451  384787 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 11:59:15.814528  384787 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1002 11:59:15.814874  384787 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1002 11:59:15.815489  384787 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1002 11:59:15.816067  384787 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 11:59:15.816586  384787 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 11:59:15.817099  384787 kubeadm.go:322] [certs] Using the existing "sa" key
	I1002 11:59:15.817161  384787 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 11:59:15.988485  384787 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 11:59:16.038665  384787 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 11:59:16.218038  384787 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 11:59:16.415133  384787 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 11:59:16.415531  384787 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 11:59:16.418000  384787 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 11:59:16.420952  384787 out.go:204]   - Booting up control plane ...
	I1002 11:59:16.421147  384787 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 11:59:16.421273  384787 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 11:59:16.423255  384787 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 11:59:16.442699  384787 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:59:16.443964  384787 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:59:16.444055  384787 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 11:59:16.602169  384787 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 11:59:14.331978  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:16.830188  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:18.831449  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:16.597731  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:18.598683  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:18.722865  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:21.222671  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:21.329396  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:21.518315  384965 pod_ready.go:81] duration metric: took 4m0.000482629s waiting for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:21.518363  384965 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:21.518378  384965 pod_ready.go:38] duration metric: took 4m4.800712941s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:21.518406  384965 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:59:21.518451  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 11:59:21.518519  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 11:59:21.587182  384965 cri.go:89] found id: "3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:21.587210  384965 cri.go:89] found id: ""
	I1002 11:59:21.587221  384965 logs.go:284] 1 containers: [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735]
	I1002 11:59:21.587285  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.592996  384965 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 11:59:21.593072  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 11:59:21.635267  384965 cri.go:89] found id: "8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:21.635293  384965 cri.go:89] found id: ""
	I1002 11:59:21.635306  384965 logs.go:284] 1 containers: [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d]
	I1002 11:59:21.635367  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.640347  384965 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 11:59:21.640428  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 11:59:21.686113  384965 cri.go:89] found id: "f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:21.686146  384965 cri.go:89] found id: ""
	I1002 11:59:21.686157  384965 logs.go:284] 1 containers: [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d]
	I1002 11:59:21.686224  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.691867  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 11:59:21.691959  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 11:59:21.745210  384965 cri.go:89] found id: "7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:21.745245  384965 cri.go:89] found id: ""
	I1002 11:59:21.745257  384965 logs.go:284] 1 containers: [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e]
	I1002 11:59:21.745330  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.750774  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 11:59:21.750862  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 11:59:21.810054  384965 cri.go:89] found id: "d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:21.810084  384965 cri.go:89] found id: ""
	I1002 11:59:21.810099  384965 logs.go:284] 1 containers: [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6]
	I1002 11:59:21.810161  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.815433  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 11:59:21.815518  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 11:59:21.858759  384965 cri.go:89] found id: "beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:21.858794  384965 cri.go:89] found id: ""
	I1002 11:59:21.858807  384965 logs.go:284] 1 containers: [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f]
	I1002 11:59:21.858887  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.864818  384965 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 11:59:21.864900  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 11:59:21.920312  384965 cri.go:89] found id: ""
	I1002 11:59:21.920343  384965 logs.go:284] 0 containers: []
	W1002 11:59:21.920353  384965 logs.go:286] No container was found matching "kindnet"
	I1002 11:59:21.920362  384965 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 11:59:21.920429  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 11:59:21.964677  384965 cri.go:89] found id: "2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:21.964708  384965 cri.go:89] found id: "b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:21.964715  384965 cri.go:89] found id: ""
	I1002 11:59:21.964724  384965 logs.go:284] 2 containers: [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358]
	I1002 11:59:21.964812  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.970514  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.976118  384965 logs.go:123] Gathering logs for kube-apiserver [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735] ...
	I1002 11:59:21.976158  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:22.026289  384965 logs.go:123] Gathering logs for kube-controller-manager [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f] ...
	I1002 11:59:22.026337  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:22.094330  384965 logs.go:123] Gathering logs for storage-provisioner [b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358] ...
	I1002 11:59:22.094389  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:22.133879  384965 logs.go:123] Gathering logs for container status ...
	I1002 11:59:22.133911  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 11:59:22.186645  384965 logs.go:123] Gathering logs for dmesg ...
	I1002 11:59:22.186688  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 11:59:22.200091  384965 logs.go:123] Gathering logs for kube-proxy [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6] ...
	I1002 11:59:22.200132  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:22.245383  384965 logs.go:123] Gathering logs for etcd [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d] ...
	I1002 11:59:22.245420  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:22.312167  384965 logs.go:123] Gathering logs for kube-scheduler [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e] ...
	I1002 11:59:22.312212  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:22.358596  384965 logs.go:123] Gathering logs for kubelet ...
	I1002 11:59:22.358631  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 11:59:22.417643  384965 logs.go:123] Gathering logs for coredns [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d] ...
	I1002 11:59:22.417695  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:22.467793  384965 logs.go:123] Gathering logs for storage-provisioner [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175] ...
	I1002 11:59:22.467830  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:22.509173  384965 logs.go:123] Gathering logs for CRI-O ...
	I1002 11:59:22.509216  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 11:59:23.037502  384965 logs.go:123] Gathering logs for describe nodes ...
	I1002 11:59:23.037554  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 11:59:19.792274  384505 pod_ready.go:81] duration metric: took 4m0.000796599s waiting for pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:19.792309  384505 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:19.792337  384505 pod_ready.go:38] duration metric: took 4m1.196150969s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:19.792389  384505 kubeadm.go:640] restartCluster took 5m11.202020009s
	W1002 11:59:19.792478  384505 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1002 11:59:19.792509  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 11:59:24.926525  384505 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.133982838s)
	I1002 11:59:24.926616  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:24.943054  384505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:59:24.953201  384505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:59:24.963105  384505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:59:24.963158  384505 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1002 11:59:25.027860  384505 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1002 11:59:25.027986  384505 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 11:59:25.214224  384505 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 11:59:25.214399  384505 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 11:59:25.214529  384505 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 11:59:25.472019  384505 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:59:25.472706  384505 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:59:25.481965  384505 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1002 11:59:25.630265  384505 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 11:59:25.105120  384787 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502545 seconds
	I1002 11:59:25.105321  384787 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 11:59:25.124191  384787 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 11:59:25.659886  384787 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 11:59:25.660110  384787 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-487027 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 11:59:26.180742  384787 kubeadm.go:322] [bootstrap-token] Using token: tg9u90.7q86afgrs7pieyop
	I1002 11:59:23.723485  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:25.724673  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:26.182574  384787 out.go:204]   - Configuring RBAC rules ...
	I1002 11:59:26.182738  384787 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 11:59:26.190559  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 11:59:26.200659  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 11:59:26.212391  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 11:59:26.217946  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 11:59:26.226534  384787 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 11:59:26.248000  384787 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 11:59:26.545226  384787 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 11:59:26.604475  384787 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 11:59:26.605636  384787 kubeadm.go:322] 
	I1002 11:59:26.605726  384787 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 11:59:26.605738  384787 kubeadm.go:322] 
	I1002 11:59:26.605810  384787 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 11:59:26.605815  384787 kubeadm.go:322] 
	I1002 11:59:26.605844  384787 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 11:59:26.605914  384787 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 11:59:26.605973  384787 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 11:59:26.605981  384787 kubeadm.go:322] 
	I1002 11:59:26.606052  384787 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 11:59:26.606058  384787 kubeadm.go:322] 
	I1002 11:59:26.606097  384787 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 11:59:26.606101  384787 kubeadm.go:322] 
	I1002 11:59:26.606143  384787 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 11:59:26.606203  384787 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 11:59:26.606263  384787 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 11:59:26.606267  384787 kubeadm.go:322] 
	I1002 11:59:26.606334  384787 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 11:59:26.606438  384787 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 11:59:26.606446  384787 kubeadm.go:322] 
	I1002 11:59:26.606580  384787 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tg9u90.7q86afgrs7pieyop \
	I1002 11:59:26.606732  384787 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 11:59:26.606764  384787 kubeadm.go:322] 	--control-plane 
	I1002 11:59:26.606773  384787 kubeadm.go:322] 
	I1002 11:59:26.606906  384787 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 11:59:26.606919  384787 kubeadm.go:322] 
	I1002 11:59:26.607066  384787 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tg9u90.7q86afgrs7pieyop \
	I1002 11:59:26.607192  384787 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 11:59:26.608470  384787 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 11:59:26.608503  384787 cni.go:84] Creating CNI manager for ""
	I1002 11:59:26.608547  384787 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:59:26.610426  384787 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:59:25.632074  384505 out.go:204]   - Generating certificates and keys ...
	I1002 11:59:25.632197  384505 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 11:59:25.632294  384505 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 11:59:25.632398  384505 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 11:59:25.632546  384505 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1002 11:59:25.632693  384505 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 11:59:25.633319  384505 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1002 11:59:25.633417  384505 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1002 11:59:25.633720  384505 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1002 11:59:25.634302  384505 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 11:59:25.635341  384505 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 11:59:25.635391  384505 kubeadm.go:322] [certs] Using the existing "sa" key
	I1002 11:59:25.635461  384505 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 11:59:25.743684  384505 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 11:59:25.940709  384505 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 11:59:26.418951  384505 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 11:59:26.676172  384505 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 11:59:26.677698  384505 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 11:59:26.612002  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:59:26.646809  384787 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:59:26.709486  384787 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:59:26.709648  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:26.709720  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=embed-certs-487027 minikube.k8s.io/updated_at=2023_10_02T11_59_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:26.778472  384787 ops.go:34] apiserver oom_adj: -16
	I1002 11:59:27.199359  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:27.351099  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:25.716079  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:59:25.739754  384965 api_server.go:72] duration metric: took 4m15.900505961s to wait for apiserver process to appear ...
	I1002 11:59:25.739785  384965 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:59:25.739834  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 11:59:25.739904  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 11:59:25.788719  384965 cri.go:89] found id: "3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:25.788747  384965 cri.go:89] found id: ""
	I1002 11:59:25.788758  384965 logs.go:284] 1 containers: [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735]
	I1002 11:59:25.788824  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.794426  384965 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 11:59:25.794500  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 11:59:25.836689  384965 cri.go:89] found id: "8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:25.836721  384965 cri.go:89] found id: ""
	I1002 11:59:25.836731  384965 logs.go:284] 1 containers: [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d]
	I1002 11:59:25.836808  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.841671  384965 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 11:59:25.841744  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 11:59:25.883947  384965 cri.go:89] found id: "f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:25.883976  384965 cri.go:89] found id: ""
	I1002 11:59:25.883986  384965 logs.go:284] 1 containers: [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d]
	I1002 11:59:25.884049  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.892631  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 11:59:25.892758  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 11:59:25.966469  384965 cri.go:89] found id: "7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:25.966502  384965 cri.go:89] found id: ""
	I1002 11:59:25.966514  384965 logs.go:284] 1 containers: [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e]
	I1002 11:59:25.966575  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.971814  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 11:59:25.971890  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 11:59:26.020970  384965 cri.go:89] found id: "d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:26.021002  384965 cri.go:89] found id: ""
	I1002 11:59:26.021013  384965 logs.go:284] 1 containers: [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6]
	I1002 11:59:26.021076  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.025582  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 11:59:26.025657  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 11:59:26.077339  384965 cri.go:89] found id: "beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:26.077371  384965 cri.go:89] found id: ""
	I1002 11:59:26.077383  384965 logs.go:284] 1 containers: [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f]
	I1002 11:59:26.077448  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.082311  384965 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 11:59:26.082396  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 11:59:26.126803  384965 cri.go:89] found id: ""
	I1002 11:59:26.126833  384965 logs.go:284] 0 containers: []
	W1002 11:59:26.126843  384965 logs.go:286] No container was found matching "kindnet"
	I1002 11:59:26.126851  384965 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 11:59:26.126992  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 11:59:26.176829  384965 cri.go:89] found id: "2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:26.176858  384965 cri.go:89] found id: "b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:26.176866  384965 cri.go:89] found id: ""
	I1002 11:59:26.176876  384965 logs.go:284] 2 containers: [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358]
	I1002 11:59:26.176945  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.182892  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.189288  384965 logs.go:123] Gathering logs for kube-controller-manager [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f] ...
	I1002 11:59:26.189316  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:26.257856  384965 logs.go:123] Gathering logs for storage-provisioner [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175] ...
	I1002 11:59:26.257910  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:26.297691  384965 logs.go:123] Gathering logs for container status ...
	I1002 11:59:26.297747  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 11:59:26.351211  384965 logs.go:123] Gathering logs for kubelet ...
	I1002 11:59:26.351254  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 11:59:26.425373  384965 logs.go:123] Gathering logs for describe nodes ...
	I1002 11:59:26.425416  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 11:59:26.568944  384965 logs.go:123] Gathering logs for kube-proxy [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6] ...
	I1002 11:59:26.568985  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:26.627406  384965 logs.go:123] Gathering logs for dmesg ...
	I1002 11:59:26.627449  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 11:59:26.641249  384965 logs.go:123] Gathering logs for coredns [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d] ...
	I1002 11:59:26.641281  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:26.696939  384965 logs.go:123] Gathering logs for storage-provisioner [b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358] ...
	I1002 11:59:26.696974  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:26.744365  384965 logs.go:123] Gathering logs for CRI-O ...
	I1002 11:59:26.744406  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 11:59:27.279579  384965 logs.go:123] Gathering logs for kube-apiserver [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735] ...
	I1002 11:59:27.279639  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:27.366447  384965 logs.go:123] Gathering logs for etcd [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d] ...
	I1002 11:59:27.366508  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:27.436429  384965 logs.go:123] Gathering logs for kube-scheduler [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e] ...
	I1002 11:59:27.436476  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:26.679464  384505 out.go:204]   - Booting up control plane ...
	I1002 11:59:26.679594  384505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 11:59:26.688060  384505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 11:59:26.700892  384505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 11:59:26.702245  384505 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 11:59:26.706277  384505 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 11:59:28.222692  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:30.223561  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:27.973079  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:28.472938  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:28.973900  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:29.473650  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:29.972984  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:30.473216  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:30.973931  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:31.474026  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:31.973024  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:32.473723  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:29.989828  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:59:29.995664  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 200:
	ok
	I1002 11:59:29.998819  384965 api_server.go:141] control plane version: v1.28.2
	I1002 11:59:29.998846  384965 api_server.go:131] duration metric: took 4.25905343s to wait for apiserver health ...
	I1002 11:59:29.998855  384965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:59:29.998882  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 11:59:29.998944  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 11:59:30.037898  384965 cri.go:89] found id: "3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:30.037925  384965 cri.go:89] found id: ""
	I1002 11:59:30.037935  384965 logs.go:284] 1 containers: [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735]
	I1002 11:59:30.038014  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.042751  384965 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 11:59:30.042835  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 11:59:30.085339  384965 cri.go:89] found id: "8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:30.085378  384965 cri.go:89] found id: ""
	I1002 11:59:30.085390  384965 logs.go:284] 1 containers: [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d]
	I1002 11:59:30.085463  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.090184  384965 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 11:59:30.090265  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 11:59:30.130574  384965 cri.go:89] found id: "f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:30.130602  384965 cri.go:89] found id: ""
	I1002 11:59:30.130611  384965 logs.go:284] 1 containers: [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d]
	I1002 11:59:30.130665  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.135040  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 11:59:30.135125  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 11:59:30.178044  384965 cri.go:89] found id: "7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:30.178067  384965 cri.go:89] found id: ""
	I1002 11:59:30.178078  384965 logs.go:284] 1 containers: [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e]
	I1002 11:59:30.178144  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.182586  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 11:59:30.182662  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 11:59:30.226121  384965 cri.go:89] found id: "d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:30.226142  384965 cri.go:89] found id: ""
	I1002 11:59:30.226152  384965 logs.go:284] 1 containers: [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6]
	I1002 11:59:30.226209  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.231080  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 11:59:30.231156  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 11:59:30.275499  384965 cri.go:89] found id: "beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:30.275533  384965 cri.go:89] found id: ""
	I1002 11:59:30.275545  384965 logs.go:284] 1 containers: [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f]
	I1002 11:59:30.275611  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.281023  384965 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 11:59:30.281089  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 11:59:30.325580  384965 cri.go:89] found id: ""
	I1002 11:59:30.325610  384965 logs.go:284] 0 containers: []
	W1002 11:59:30.325622  384965 logs.go:286] No container was found matching "kindnet"
	I1002 11:59:30.325630  384965 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 11:59:30.325691  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 11:59:30.372727  384965 cri.go:89] found id: "2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:30.372760  384965 cri.go:89] found id: "b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:30.372766  384965 cri.go:89] found id: ""
	I1002 11:59:30.372776  384965 logs.go:284] 2 containers: [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358]
	I1002 11:59:30.372838  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.377541  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.382371  384965 logs.go:123] Gathering logs for kubelet ...
	I1002 11:59:30.382403  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 11:59:30.449081  384965 logs.go:123] Gathering logs for etcd [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d] ...
	I1002 11:59:30.449132  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:30.519339  384965 logs.go:123] Gathering logs for coredns [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d] ...
	I1002 11:59:30.519392  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:30.566205  384965 logs.go:123] Gathering logs for storage-provisioner [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175] ...
	I1002 11:59:30.566250  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:30.607933  384965 logs.go:123] Gathering logs for container status ...
	I1002 11:59:30.607973  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 11:59:30.655904  384965 logs.go:123] Gathering logs for kube-apiserver [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735] ...
	I1002 11:59:30.655946  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:30.717563  384965 logs.go:123] Gathering logs for kube-controller-manager [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f] ...
	I1002 11:59:30.717619  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:30.779216  384965 logs.go:123] Gathering logs for storage-provisioner [b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358] ...
	I1002 11:59:30.779268  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:30.822075  384965 logs.go:123] Gathering logs for CRI-O ...
	I1002 11:59:30.822114  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 11:59:31.180609  384965 logs.go:123] Gathering logs for dmesg ...
	I1002 11:59:31.180664  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 11:59:31.196239  384965 logs.go:123] Gathering logs for describe nodes ...
	I1002 11:59:31.196274  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 11:59:31.345274  384965 logs.go:123] Gathering logs for kube-scheduler [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e] ...
	I1002 11:59:31.345318  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:31.392175  384965 logs.go:123] Gathering logs for kube-proxy [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6] ...
	I1002 11:59:31.392212  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:33.946599  384965 system_pods.go:59] 8 kube-system pods found
	I1002 11:59:33.946635  384965 system_pods.go:61] "coredns-5dd5756b68-9wv56" [f04d6125-ea28-41cc-9251-7ccee27162bc] Running
	I1002 11:59:33.946643  384965 system_pods.go:61] "etcd-default-k8s-diff-port-777999" [5bc34f24-2922-4ce8-b11d-935f9b3c8b4c] Running
	I1002 11:59:33.946650  384965 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-777999" [b44f8ca6-f43e-4c99-af8d-23255f94257c] Running
	I1002 11:59:33.946656  384965 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-777999" [5e20830f-d51b-4ca7-a7b6-e0f24f4a50e1] Running
	I1002 11:59:33.946659  384965 system_pods.go:61] "kube-proxy-gchnc" [061811c7-2ac8-448a-b441-838f9aaf9145] Running
	I1002 11:59:33.946664  384965 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-777999" [9d874541-7d2e-4c9b-8935-2bf7386e6a07] Running
	I1002 11:59:33.946677  384965 system_pods.go:61] "metrics-server-57f55c9bc5-wk2c7" [f28e9db7-2182-40d8-85a7-fa40c2ff8077] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:33.946687  384965 system_pods.go:61] "storage-provisioner" [aff1275b-909d-4c70-9fb5-cb36170c591e] Running
	I1002 11:59:33.946704  384965 system_pods.go:74] duration metric: took 3.947840874s to wait for pod list to return data ...
	I1002 11:59:33.946715  384965 default_sa.go:34] waiting for default service account to be created ...
	I1002 11:59:33.950028  384965 default_sa.go:45] found service account: "default"
	I1002 11:59:33.950059  384965 default_sa.go:55] duration metric: took 3.333093ms for default service account to be created ...
	I1002 11:59:33.950069  384965 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 11:59:33.956623  384965 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:33.956651  384965 system_pods.go:89] "coredns-5dd5756b68-9wv56" [f04d6125-ea28-41cc-9251-7ccee27162bc] Running
	I1002 11:59:33.956657  384965 system_pods.go:89] "etcd-default-k8s-diff-port-777999" [5bc34f24-2922-4ce8-b11d-935f9b3c8b4c] Running
	I1002 11:59:33.956662  384965 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-777999" [b44f8ca6-f43e-4c99-af8d-23255f94257c] Running
	I1002 11:59:33.956666  384965 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-777999" [5e20830f-d51b-4ca7-a7b6-e0f24f4a50e1] Running
	I1002 11:59:33.956670  384965 system_pods.go:89] "kube-proxy-gchnc" [061811c7-2ac8-448a-b441-838f9aaf9145] Running
	I1002 11:59:33.956674  384965 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-777999" [9d874541-7d2e-4c9b-8935-2bf7386e6a07] Running
	I1002 11:59:33.956681  384965 system_pods.go:89] "metrics-server-57f55c9bc5-wk2c7" [f28e9db7-2182-40d8-85a7-fa40c2ff8077] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:33.956686  384965 system_pods.go:89] "storage-provisioner" [aff1275b-909d-4c70-9fb5-cb36170c591e] Running
	I1002 11:59:33.956694  384965 system_pods.go:126] duration metric: took 6.618721ms to wait for k8s-apps to be running ...
	I1002 11:59:33.956704  384965 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:59:33.956749  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:33.976674  384965 system_svc.go:56] duration metric: took 19.952308ms WaitForService to wait for kubelet.
	I1002 11:59:33.976710  384965 kubeadm.go:581] duration metric: took 4m24.137472355s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:59:33.976750  384965 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:59:33.982173  384965 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:59:33.982211  384965 node_conditions.go:123] node cpu capacity is 2
	I1002 11:59:33.982227  384965 node_conditions.go:105] duration metric: took 5.470843ms to run NodePressure ...
	I1002 11:59:33.982242  384965 start.go:228] waiting for startup goroutines ...
	I1002 11:59:33.982251  384965 start.go:233] waiting for cluster config update ...
	I1002 11:59:33.982303  384965 start.go:242] writing updated cluster config ...
	I1002 11:59:33.982687  384965 ssh_runner.go:195] Run: rm -f paused
	I1002 11:59:34.039684  384965 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 11:59:34.041739  384965 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-777999" cluster and "default" namespace by default
	I1002 11:59:32.723475  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:35.221523  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:32.973400  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:33.473644  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:33.973820  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:34.473607  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:34.973848  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:35.473328  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:35.973485  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:36.473888  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:36.973837  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:37.473514  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:37.973633  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.094807  384787 kubeadm.go:1081] duration metric: took 11.38520709s to wait for elevateKubeSystemPrivileges.
	I1002 11:59:38.094846  384787 kubeadm.go:406] StartCluster complete in 5m11.722637512s
	I1002 11:59:38.094872  384787 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:38.094972  384787 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:59:38.097201  384787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:38.097495  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:59:38.097829  384787 config.go:182] Loaded profile config "embed-certs-487027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:59:38.097966  384787 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:59:38.098056  384787 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-487027"
	I1002 11:59:38.098079  384787 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-487027"
	I1002 11:59:38.098083  384787 addons.go:69] Setting default-storageclass=true in profile "embed-certs-487027"
	I1002 11:59:38.098098  384787 addons.go:69] Setting metrics-server=true in profile "embed-certs-487027"
	I1002 11:59:38.098110  384787 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-487027"
	I1002 11:59:38.098113  384787 addons.go:231] Setting addon metrics-server=true in "embed-certs-487027"
	W1002 11:59:38.098125  384787 addons.go:240] addon metrics-server should already be in state true
	I1002 11:59:38.098177  384787 host.go:66] Checking if "embed-certs-487027" exists ...
	I1002 11:59:38.098608  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.098643  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.098647  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	W1002 11:59:38.098092  384787 addons.go:240] addon storage-provisioner should already be in state true
	I1002 11:59:38.098827  384787 host.go:66] Checking if "embed-certs-487027" exists ...
	I1002 11:59:38.098670  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.099207  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.099235  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.118215  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40451
	I1002 11:59:38.118691  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.119232  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.119260  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.119649  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.120147  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.120182  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.129398  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41027
	I1002 11:59:38.129652  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35275
	I1002 11:59:38.130092  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.130723  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.130746  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.131301  384787 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-487027" context rescaled to 1 replicas
	I1002 11:59:38.131342  384787 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.147 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:59:38.133196  384787 out.go:177] * Verifying Kubernetes components...
	I1002 11:59:38.134675  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:38.132825  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.134964  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.135242  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.135408  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.135434  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.135834  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.136413  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.136455  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.138974  384787 addons.go:231] Setting addon default-storageclass=true in "embed-certs-487027"
	W1002 11:59:38.138995  384787 addons.go:240] addon default-storageclass should already be in state true
	I1002 11:59:38.139025  384787 host.go:66] Checking if "embed-certs-487027" exists ...
	I1002 11:59:38.139434  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.139469  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.141226  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40053
	I1002 11:59:38.141643  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.142086  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.142104  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.142433  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.142609  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.144425  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:59:38.146525  384787 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 11:59:38.148187  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 11:59:38.148204  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 11:59:38.148227  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:59:38.152187  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.152549  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:59:38.152575  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.152783  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:59:38.152988  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:59:38.153139  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:59:38.153280  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:59:38.157114  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33487
	I1002 11:59:38.157655  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.158192  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.158211  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.158619  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.159253  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.159290  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.159506  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34867
	I1002 11:59:38.159895  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.160383  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.160395  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.160727  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.160902  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.162835  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:59:38.164490  384787 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:59:37.211498  384505 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.504818 seconds
	I1002 11:59:37.211660  384505 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 11:59:37.229976  384505 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 11:59:37.759297  384505 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 11:59:37.759467  384505 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-749860 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1002 11:59:38.284135  384505 kubeadm.go:322] [bootstrap-token] Using token: rt49x4.7033jvaiaszsonci
	I1002 11:59:38.285950  384505 out.go:204]   - Configuring RBAC rules ...
	I1002 11:59:38.286108  384505 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 11:59:38.299290  384505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 11:59:38.306326  384505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 11:59:38.312137  384505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 11:59:38.320028  384505 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 11:59:38.439411  384505 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 11:59:38.704007  384505 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 11:59:38.705937  384505 kubeadm.go:322] 
	I1002 11:59:38.706075  384505 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 11:59:38.706096  384505 kubeadm.go:322] 
	I1002 11:59:38.706210  384505 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 11:59:38.706221  384505 kubeadm.go:322] 
	I1002 11:59:38.706256  384505 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 11:59:38.706341  384505 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 11:59:38.706433  384505 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 11:59:38.706448  384505 kubeadm.go:322] 
	I1002 11:59:38.706527  384505 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 11:59:38.706614  384505 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 11:59:38.706701  384505 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 11:59:38.706712  384505 kubeadm.go:322] 
	I1002 11:59:38.706805  384505 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1002 11:59:38.706898  384505 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 11:59:38.706910  384505 kubeadm.go:322] 
	I1002 11:59:38.707003  384505 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rt49x4.7033jvaiaszsonci \
	I1002 11:59:38.707134  384505 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 11:59:38.707169  384505 kubeadm.go:322]     --control-plane 	  
	I1002 11:59:38.707179  384505 kubeadm.go:322] 
	I1002 11:59:38.707272  384505 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 11:59:38.707283  384505 kubeadm.go:322] 
	I1002 11:59:38.707373  384505 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rt49x4.7033jvaiaszsonci \
	I1002 11:59:38.707500  384505 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 11:59:38.708451  384505 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 11:59:38.708478  384505 cni.go:84] Creating CNI manager for ""
	I1002 11:59:38.708501  384505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:59:38.710166  384505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:59:38.711596  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:59:38.725385  384505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:59:38.748155  384505 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:59:38.748294  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.748295  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=old-k8s-version-749860 minikube.k8s.io/updated_at=2023_10_02T11_59_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.795585  384505 ops.go:34] apiserver oom_adj: -16
	I1002 11:59:39.068200  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.166036  384787 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:38.166047  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:59:38.166063  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:59:38.169435  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.169903  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:59:38.169929  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.170098  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:59:38.170273  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:59:38.170517  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:59:38.170711  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:59:38.177450  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35439
	I1002 11:59:38.178044  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.178596  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.178616  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.179009  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.179244  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.181209  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:59:38.181596  384787 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:38.181613  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:59:38.181641  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:59:38.185272  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.185785  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:59:38.185813  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.186245  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:59:38.186539  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:59:38.186748  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:59:38.186938  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:59:38.337092  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 11:59:38.337129  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 11:59:38.379388  384787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:38.389992  384787 node_ready.go:35] waiting up to 6m0s for node "embed-certs-487027" to be "Ready" ...
	I1002 11:59:38.390060  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 11:59:38.399264  384787 node_ready.go:49] node "embed-certs-487027" has status "Ready":"True"
	I1002 11:59:38.399295  384787 node_ready.go:38] duration metric: took 9.264648ms waiting for node "embed-certs-487027" to be "Ready" ...
	I1002 11:59:38.399308  384787 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:38.401885  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 11:59:38.401909  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 11:59:38.406757  384787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:38.438158  384787 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.458749  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:38.458784  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 11:59:38.517143  384787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:38.547128  384787 pod_ready.go:92] pod "etcd-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:38.547161  384787 pod_ready.go:81] duration metric: took 108.899374ms waiting for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.547176  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.744560  384787 pod_ready.go:92] pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:38.744587  384787 pod_ready.go:81] duration metric: took 197.40322ms waiting for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.744598  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.852242  384787 pod_ready.go:92] pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:38.852277  384787 pod_ready.go:81] duration metric: took 107.671499ms waiting for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.852294  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6g7f7" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.017545  384787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.638113738s)
	I1002 11:59:41.017602  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.017613  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.017597  384787 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.627499125s)
	I1002 11:59:41.017658  384787 start.go:923] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1002 11:59:41.017718  384787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.610925223s)
	I1002 11:59:41.017747  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.017759  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.017907  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.017960  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.017977  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.017994  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.018535  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.018549  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.018559  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.018568  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.018636  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.018645  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.018679  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Closing plugin on server side
	I1002 11:59:41.019046  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Closing plugin on server side
	I1002 11:59:41.019049  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.019064  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.027153  384787 pod_ready.go:102] pod "kube-proxy-6g7f7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:41.049978  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.050007  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.050369  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.050391  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.100800  384787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.583606696s)
	I1002 11:59:41.100870  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.100900  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.101237  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.101258  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.101268  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.101278  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.101576  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Closing plugin on server side
	I1002 11:59:41.101621  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.101634  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.101647  384787 addons.go:467] Verifying addon metrics-server=true in "embed-certs-487027"
	I1002 11:59:41.103637  384787 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1002 11:59:37.222165  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:39.223800  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:41.105142  384787 addons.go:502] enable addons completed in 3.007188775s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1002 11:59:41.492039  384787 pod_ready.go:92] pod "kube-proxy-6g7f7" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:41.492067  384787 pod_ready.go:81] duration metric: took 2.639765498s waiting for pod "kube-proxy-6g7f7" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.492081  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.500950  384787 pod_ready.go:92] pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:41.500979  384787 pod_ready.go:81] duration metric: took 8.889098ms waiting for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.500990  384787 pod_ready.go:38] duration metric: took 3.101668727s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:41.501012  384787 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:59:41.501079  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:59:41.533141  384787 api_server.go:72] duration metric: took 3.401757173s to wait for apiserver process to appear ...
	I1002 11:59:41.533167  384787 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:59:41.533183  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:59:41.543027  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 200:
	ok
	I1002 11:59:41.545456  384787 api_server.go:141] control plane version: v1.28.2
	I1002 11:59:41.545483  384787 api_server.go:131] duration metric: took 12.308941ms to wait for apiserver health ...
	I1002 11:59:41.545494  384787 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:59:41.556090  384787 system_pods.go:59] 8 kube-system pods found
	I1002 11:59:41.556183  384787 system_pods.go:61] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:41.556209  384787 system_pods.go:61] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:41.556247  384787 system_pods.go:61] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:41.556272  384787 system_pods.go:61] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:41.556290  384787 system_pods.go:61] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:41.556306  384787 system_pods.go:61] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:41.556329  384787 system_pods.go:61] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:41.556366  384787 system_pods.go:61] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:41.556392  384787 system_pods.go:74] duration metric: took 10.889958ms to wait for pod list to return data ...
	I1002 11:59:41.556412  384787 default_sa.go:34] waiting for default service account to be created ...
	I1002 11:59:41.594659  384787 default_sa.go:45] found service account: "default"
	I1002 11:59:41.594690  384787 default_sa.go:55] duration metric: took 38.261546ms for default service account to be created ...
	I1002 11:59:41.594701  384787 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 11:59:41.800342  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:41.800375  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:41.800382  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:41.800388  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:41.800393  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:41.800397  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:41.800401  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:41.800407  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:41.800412  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:41.800431  384787 retry.go:31] will retry after 300.830497ms: missing components: kube-dns
	I1002 11:59:42.116978  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:42.117028  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:42.117039  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:42.117048  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:42.117058  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:42.117064  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:42.117071  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:42.117080  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:42.117089  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:42.117109  384787 retry.go:31] will retry after 380.49084ms: missing components: kube-dns
	I1002 11:59:42.506867  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:42.506901  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:42.506908  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:42.506914  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:42.506919  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:42.506923  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:42.506927  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:42.506933  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:42.506939  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:42.506954  384787 retry.go:31] will retry after 409.062449ms: missing components: kube-dns
	I1002 11:59:42.924401  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:42.924443  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:42.924456  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:42.924464  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:42.924471  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:42.924477  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:42.924484  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:42.924493  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:42.924503  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:42.924524  384787 retry.go:31] will retry after 544.758887ms: missing components: kube-dns
	I1002 11:59:43.477592  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:43.477622  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Running
	I1002 11:59:43.477628  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:43.477632  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:43.477637  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:43.477640  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:43.477645  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:43.477651  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:43.477657  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Running
	I1002 11:59:43.477665  384787 system_pods.go:126] duration metric: took 1.882959518s to wait for k8s-apps to be running ...
	I1002 11:59:43.477672  384787 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:59:43.477714  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:43.492105  384787 system_svc.go:56] duration metric: took 14.416995ms WaitForService to wait for kubelet.
	I1002 11:59:43.492138  384787 kubeadm.go:581] duration metric: took 5.360761991s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:59:43.492161  384787 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:59:43.496739  384787 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:59:43.496769  384787 node_conditions.go:123] node cpu capacity is 2
	I1002 11:59:43.496785  384787 node_conditions.go:105] duration metric: took 4.61842ms to run NodePressure ...
	I1002 11:59:43.496801  384787 start.go:228] waiting for startup goroutines ...
	I1002 11:59:43.496810  384787 start.go:233] waiting for cluster config update ...
	I1002 11:59:43.496823  384787 start.go:242] writing updated cluster config ...
	I1002 11:59:43.497156  384787 ssh_runner.go:195] Run: rm -f paused
	I1002 11:59:43.568627  384787 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 11:59:43.570324  384787 out.go:177] * Done! kubectl is now configured to use "embed-certs-487027" cluster and "default" namespace by default
	I1002 11:59:39.194035  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:39.810338  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:40.310222  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:40.809912  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:41.310004  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:41.810506  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:42.309581  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:42.810312  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:43.310294  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:43.809602  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:41.722699  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:44.221300  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:44.309927  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:44.810169  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:45.310095  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:45.809546  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:46.310144  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:46.809605  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:47.310487  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:47.809697  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:48.309464  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:48.809680  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:46.723036  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:49.220863  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:51.221417  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:49.310000  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:49.809922  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:50.310214  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:50.809728  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:51.309659  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:51.809723  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:52.309837  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:52.809788  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:53.309655  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:53.809468  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:54.310103  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:54.810421  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:54.968150  384505 kubeadm.go:1081] duration metric: took 16.219921091s to wait for elevateKubeSystemPrivileges.
	I1002 11:59:54.968184  384505 kubeadm.go:406] StartCluster complete in 5m46.426951815s
	I1002 11:59:54.968203  384505 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:54.968302  384505 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:59:54.970101  384505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:54.970429  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:59:54.970599  384505 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:59:54.970672  384505 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-749860"
	I1002 11:59:54.970692  384505 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-749860"
	W1002 11:59:54.970703  384505 addons.go:240] addon storage-provisioner should already be in state true
	I1002 11:59:54.970723  384505 config.go:182] Loaded profile config "old-k8s-version-749860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1002 11:59:54.970753  384505 host.go:66] Checking if "old-k8s-version-749860" exists ...
	I1002 11:59:54.970775  384505 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-749860"
	I1002 11:59:54.970792  384505 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-749860"
	I1002 11:59:54.971196  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.971204  384505 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-749860"
	I1002 11:59:54.971226  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.971199  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.971240  384505 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-749860"
	W1002 11:59:54.971251  384505 addons.go:240] addon metrics-server should already be in state true
	I1002 11:59:54.971281  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.971297  384505 host.go:66] Checking if "old-k8s-version-749860" exists ...
	I1002 11:59:54.971669  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.971707  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.989112  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40477
	I1002 11:59:54.989701  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:54.989819  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38163
	I1002 11:59:54.989971  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41679
	I1002 11:59:54.990503  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:54.990552  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:54.990574  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:54.990592  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:54.990975  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:54.991042  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:54.991062  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:54.991094  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:54.991110  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:54.991327  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:54.991555  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:54.991596  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:54.992169  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.992183  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.992197  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.992206  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.998018  384505 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-749860"
	W1002 11:59:54.998043  384505 addons.go:240] addon default-storageclass should already be in state true
	I1002 11:59:54.998067  384505 host.go:66] Checking if "old-k8s-version-749860" exists ...
	I1002 11:59:54.998716  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:55.003322  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:55.020037  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
	I1002 11:59:55.020659  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.021292  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.021313  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.021707  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.021896  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:55.022155  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34603
	I1002 11:59:55.022286  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35351
	I1002 11:59:55.022697  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.024740  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:59:55.024793  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.024824  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.024839  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.027065  384505 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 11:59:55.025237  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.025561  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.028415  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.028568  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 11:59:55.028579  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 11:59:55.028596  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:59:55.028867  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.029051  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:55.030397  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:55.030424  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:55.031461  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:59:55.033181  384505 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:59:55.032032  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.032651  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:59:55.034670  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:59:55.034698  384505 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:55.034703  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.034711  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:59:55.034727  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:59:55.034894  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:59:55.035089  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:59:55.035269  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:59:55.046534  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.046573  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:59:55.046599  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.046629  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:59:55.046888  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:59:55.047102  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:59:55.047276  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:59:55.051887  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45615
	I1002 11:59:55.052372  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.052940  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.052970  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.053349  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.053558  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:55.055503  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:59:55.055762  384505 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:55.055780  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:59:55.055805  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:59:55.062494  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.062526  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:59:55.062542  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.062550  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:59:55.062752  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:59:55.062922  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:59:55.063162  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:59:55.103907  384505 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-749860" context rescaled to 1 replicas
	I1002 11:59:55.103958  384505 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.82 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:59:55.105626  384505 out.go:177] * Verifying Kubernetes components...
	I1002 11:59:53.722331  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:54.914848  384344 pod_ready.go:81] duration metric: took 4m0.000973055s waiting for pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:54.914899  384344 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:54.914926  384344 pod_ready.go:38] duration metric: took 4m12.745047876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:54.914963  384344 kubeadm.go:640] restartCluster took 4m32.83554771s
	W1002 11:59:54.915062  384344 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1002 11:59:54.915098  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 11:59:55.106948  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:55.283274  384505 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-749860" to be "Ready" ...
	I1002 11:59:55.283336  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 11:59:55.291603  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 11:59:55.291629  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 11:59:55.297775  384505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:55.321901  384505 node_ready.go:49] node "old-k8s-version-749860" has status "Ready":"True"
	I1002 11:59:55.321927  384505 node_ready.go:38] duration metric: took 38.615436ms waiting for node "old-k8s-version-749860" to be "Ready" ...
	I1002 11:59:55.321939  384505 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:55.327570  384505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:55.355612  384505 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:55.357164  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 11:59:55.357187  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 11:59:55.423852  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:55.423883  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 11:59:55.477683  384505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:56.041846  384505 start.go:923] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I1002 11:59:56.230394  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230432  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.230466  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230488  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.230810  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.230869  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.230888  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.230913  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.230936  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230890  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230969  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.230990  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.231024  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.231326  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.231341  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.231652  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.231667  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.231740  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.327260  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.327289  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.327633  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.327654  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.547462  384505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.069727635s)
	I1002 11:59:56.547536  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.547549  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.547901  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.547948  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.547974  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.547993  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.548010  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.548288  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.548321  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.548322  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.548333  384505 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-749860"
	I1002 11:59:56.550084  384505 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1002 11:59:56.551798  384505 addons.go:502] enable addons completed in 1.581195105s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1002 11:59:57.554993  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:59.933613  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:01.937565  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:04.431925  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:05.433988  384505 pod_ready.go:92] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:05.434013  384505 pod_ready.go:81] duration metric: took 10.078369703s waiting for pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:05.434029  384505 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mdtp5" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:05.441501  384505 pod_ready.go:92] pod "kube-proxy-mdtp5" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:05.441534  384505 pod_ready.go:81] duration metric: took 7.496823ms waiting for pod "kube-proxy-mdtp5" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:05.441543  384505 pod_ready.go:38] duration metric: took 10.1195912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:00:05.441592  384505 api_server.go:52] waiting for apiserver process to appear ...
	I1002 12:00:05.441680  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 12:00:05.460054  384505 api_server.go:72] duration metric: took 10.356049869s to wait for apiserver process to appear ...
	I1002 12:00:05.460080  384505 api_server.go:88] waiting for apiserver healthz status ...
	I1002 12:00:05.460100  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 12:00:05.466796  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 200:
	ok
	I1002 12:00:05.467813  384505 api_server.go:141] control plane version: v1.16.0
	I1002 12:00:05.467845  384505 api_server.go:131] duration metric: took 7.75678ms to wait for apiserver health ...
	I1002 12:00:05.467855  384505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 12:00:05.472349  384505 system_pods.go:59] 4 kube-system pods found
	I1002 12:00:05.472384  384505 system_pods.go:61] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:05.472391  384505 system_pods.go:61] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:05.472401  384505 system_pods.go:61] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:05.472410  384505 system_pods.go:61] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:05.472433  384505 system_pods.go:74] duration metric: took 4.569442ms to wait for pod list to return data ...
	I1002 12:00:05.472446  384505 default_sa.go:34] waiting for default service account to be created ...
	I1002 12:00:05.476327  384505 default_sa.go:45] found service account: "default"
	I1002 12:00:05.476349  384505 default_sa.go:55] duration metric: took 3.895344ms for default service account to be created ...
	I1002 12:00:05.476357  384505 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 12:00:05.480522  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:05.480545  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:05.480550  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:05.480557  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:05.480563  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:05.480579  384505 retry.go:31] will retry after 270.891275ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:05.757515  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:05.757555  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:05.757563  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:05.757574  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:05.757585  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:05.757603  384505 retry.go:31] will retry after 336.725562ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:06.099945  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:06.099978  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:06.099985  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:06.099995  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:06.100002  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:06.100024  384505 retry.go:31] will retry after 389.53153ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:06.504317  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:06.504354  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:06.504362  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:06.504375  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:06.504385  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:06.504407  384505 retry.go:31] will retry after 453.465732ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:06.962509  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:06.962534  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:06.962539  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:06.962546  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:06.962552  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:06.962568  384505 retry.go:31] will retry after 489.820063ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:07.457422  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:07.457451  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:07.457456  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:07.457465  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:07.457472  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:07.457490  384505 retry.go:31] will retry after 931.079053ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:08.394500  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:08.394527  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:08.394532  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:08.394538  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:08.394546  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:08.394562  384505 retry.go:31] will retry after 929.512162ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:09.216426  384344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.301296702s)
	I1002 12:00:09.216493  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:00:09.230712  384344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 12:00:09.239588  384344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 12:00:09.248624  384344 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 12:00:09.248677  384344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 12:00:09.466935  384344 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 12:00:09.329677  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:09.329709  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:09.329714  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:09.329722  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:09.329728  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:09.329746  384505 retry.go:31] will retry after 898.08397ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:10.232119  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:10.232155  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:10.232163  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:10.232176  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:10.232185  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:10.232212  384505 retry.go:31] will retry after 1.809149678s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:12.047424  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:12.047452  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:12.047458  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:12.047465  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:12.047471  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:12.047487  384505 retry.go:31] will retry after 2.054960799s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:14.109048  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:14.109080  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:14.109088  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:14.109098  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:14.109108  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:14.109128  384505 retry.go:31] will retry after 2.523219254s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:16.640373  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:16.640399  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:16.640405  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:16.640412  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:16.640419  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:16.640436  384505 retry.go:31] will retry after 2.61022195s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:19.606412  384344 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 12:00:19.606505  384344 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 12:00:19.606620  384344 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 12:00:19.606760  384344 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 12:00:19.606856  384344 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 12:00:19.606912  384344 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 12:00:19.608541  384344 out.go:204]   - Generating certificates and keys ...
	I1002 12:00:19.608638  384344 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 12:00:19.608743  384344 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 12:00:19.608891  384344 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 12:00:19.608999  384344 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1002 12:00:19.609113  384344 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 12:00:19.609193  384344 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1002 12:00:19.609276  384344 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1002 12:00:19.609360  384344 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1002 12:00:19.609453  384344 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 12:00:19.609548  384344 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 12:00:19.609624  384344 kubeadm.go:322] [certs] Using the existing "sa" key
	I1002 12:00:19.609694  384344 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 12:00:19.609761  384344 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 12:00:19.609833  384344 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 12:00:19.609916  384344 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 12:00:19.609991  384344 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 12:00:19.610100  384344 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 12:00:19.610182  384344 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 12:00:19.611696  384344 out.go:204]   - Booting up control plane ...
	I1002 12:00:19.611810  384344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 12:00:19.611916  384344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 12:00:19.612021  384344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 12:00:19.612173  384344 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 12:00:19.612294  384344 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 12:00:19.612346  384344 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 12:00:19.612576  384344 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 12:00:19.612683  384344 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502476 seconds
	I1002 12:00:19.612825  384344 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 12:00:19.612943  384344 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 12:00:19.613026  384344 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 12:00:19.613215  384344 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-304121 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 12:00:19.613266  384344 kubeadm.go:322] [bootstrap-token] Using token: pd40pp.2tkeaw4x1d1qfkq9
	I1002 12:00:19.614472  384344 out.go:204]   - Configuring RBAC rules ...
	I1002 12:00:19.614593  384344 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 12:00:19.614706  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 12:00:19.614912  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 12:00:19.615054  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 12:00:19.615220  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 12:00:19.615315  384344 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 12:00:19.615474  384344 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 12:00:19.615540  384344 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 12:00:19.615622  384344 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 12:00:19.615633  384344 kubeadm.go:322] 
	I1002 12:00:19.615725  384344 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 12:00:19.615747  384344 kubeadm.go:322] 
	I1002 12:00:19.615851  384344 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 12:00:19.615864  384344 kubeadm.go:322] 
	I1002 12:00:19.615894  384344 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 12:00:19.615997  384344 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 12:00:19.616084  384344 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 12:00:19.616094  384344 kubeadm.go:322] 
	I1002 12:00:19.616143  384344 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 12:00:19.616152  384344 kubeadm.go:322] 
	I1002 12:00:19.616222  384344 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 12:00:19.616240  384344 kubeadm.go:322] 
	I1002 12:00:19.616321  384344 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 12:00:19.616420  384344 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 12:00:19.616532  384344 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 12:00:19.616548  384344 kubeadm.go:322] 
	I1002 12:00:19.616640  384344 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 12:00:19.616734  384344 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 12:00:19.616743  384344 kubeadm.go:322] 
	I1002 12:00:19.616857  384344 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token pd40pp.2tkeaw4x1d1qfkq9 \
	I1002 12:00:19.617005  384344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 12:00:19.617049  384344 kubeadm.go:322] 	--control-plane 
	I1002 12:00:19.617059  384344 kubeadm.go:322] 
	I1002 12:00:19.617136  384344 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 12:00:19.617142  384344 kubeadm.go:322] 
	I1002 12:00:19.617238  384344 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token pd40pp.2tkeaw4x1d1qfkq9 \
	I1002 12:00:19.617333  384344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 12:00:19.617371  384344 cni.go:84] Creating CNI manager for ""
	I1002 12:00:19.617384  384344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 12:00:19.618962  384344 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 12:00:19.620215  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 12:00:19.650698  384344 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 12:00:19.699458  384344 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 12:00:19.699594  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=no-preload-304121 minikube.k8s.io/updated_at=2023_10_02T12_00_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:19.699598  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:19.810984  384344 ops.go:34] apiserver oom_adj: -16
	I1002 12:00:20.114460  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:20.245669  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:20.876563  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:19.256294  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:19.256319  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:19.256325  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:19.256332  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:19.256338  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:19.256355  384505 retry.go:31] will retry after 3.270215577s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:22.532684  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:22.532714  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:22.532723  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:22.532730  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:22.532737  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:22.532754  384505 retry.go:31] will retry after 5.273561216s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:21.376620  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:21.876453  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:22.376537  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:22.876967  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:23.377242  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:23.876469  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:24.376391  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:24.877422  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:25.376422  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:25.877251  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:27.810777  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:27.810810  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:27.810816  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:27.810822  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:27.810828  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:27.810845  384505 retry.go:31] will retry after 6.34425242s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:26.376388  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:26.877267  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:27.376480  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:27.877214  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:28.376560  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:28.876964  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:29.377314  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:29.877135  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:30.377301  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:30.876525  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:31.376660  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:31.876991  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:32.376934  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:32.584774  384344 kubeadm.go:1081] duration metric: took 12.88524826s to wait for elevateKubeSystemPrivileges.
	I1002 12:00:32.584821  384344 kubeadm.go:406] StartCluster complete in 5m10.55691254s
	I1002 12:00:32.584849  384344 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:00:32.584955  384344 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 12:00:32.587722  384344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:00:32.588018  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 12:00:32.588146  384344 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 12:00:32.588230  384344 addons.go:69] Setting default-storageclass=true in profile "no-preload-304121"
	I1002 12:00:32.588251  384344 addons.go:69] Setting metrics-server=true in profile "no-preload-304121"
	I1002 12:00:32.588265  384344 addons.go:231] Setting addon metrics-server=true in "no-preload-304121"
	W1002 12:00:32.588273  384344 addons.go:240] addon metrics-server should already be in state true
	I1002 12:00:32.588252  384344 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-304121"
	I1002 12:00:32.588323  384344 config.go:182] Loaded profile config "no-preload-304121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:00:32.588333  384344 host.go:66] Checking if "no-preload-304121" exists ...
	I1002 12:00:32.588229  384344 addons.go:69] Setting storage-provisioner=true in profile "no-preload-304121"
	I1002 12:00:32.588387  384344 addons.go:231] Setting addon storage-provisioner=true in "no-preload-304121"
	W1002 12:00:32.588397  384344 addons.go:240] addon storage-provisioner should already be in state true
	I1002 12:00:32.588433  384344 host.go:66] Checking if "no-preload-304121" exists ...
	I1002 12:00:32.588695  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.588731  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.588737  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.588777  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.588867  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.588891  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.612093  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40075
	I1002 12:00:32.612118  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33697
	I1002 12:00:32.612252  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I1002 12:00:32.612652  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.612799  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.612847  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.613307  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.613337  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.613432  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.613504  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.613715  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.613718  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.613838  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.613955  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.614146  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.614197  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.614802  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.614842  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.615497  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.615534  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.617844  384344 addons.go:231] Setting addon default-storageclass=true in "no-preload-304121"
	W1002 12:00:32.617884  384344 addons.go:240] addon default-storageclass should already be in state true
	I1002 12:00:32.617914  384344 host.go:66] Checking if "no-preload-304121" exists ...
	I1002 12:00:32.618326  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.618436  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.634123  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36037
	I1002 12:00:32.634849  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.634953  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
	I1002 12:00:32.635328  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.635470  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.635495  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.635819  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.635841  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.635867  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.636193  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.636340  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.636373  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.636435  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.637717  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39059
	I1002 12:00:32.638051  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 12:00:32.640160  384344 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 12:00:32.642288  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 12:00:32.642300  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 12:00:32.642314  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 12:00:32.640240  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.642837  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.642863  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.643527  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.643695  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.645514  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.645565  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 12:00:32.648157  384344 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 12:00:32.645977  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 12:00:32.646152  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 12:00:32.650297  384344 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 12:00:32.650313  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 12:00:32.650328  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 12:00:32.650380  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.650547  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 12:00:32.650823  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 12:00:32.650961  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 12:00:32.653953  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.654560  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 12:00:32.654592  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.654886  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 12:00:32.655049  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 12:00:32.655195  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 12:00:32.655410  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 12:00:32.658005  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45823
	I1002 12:00:32.658525  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.659046  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.659059  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.659478  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.659611  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.661708  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 12:00:32.661982  384344 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 12:00:32.661998  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 12:00:32.662018  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 12:00:32.664637  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.665005  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 12:00:32.665023  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.665161  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 12:00:32.665335  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 12:00:32.665426  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 12:00:32.665610  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 12:00:32.723429  384344 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-304121" context rescaled to 1 replicas
	I1002 12:00:32.723469  384344 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 12:00:32.725329  384344 out.go:177] * Verifying Kubernetes components...
	I1002 12:00:32.726924  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:00:32.860425  384344 node_ready.go:35] waiting up to 6m0s for node "no-preload-304121" to be "Ready" ...
	I1002 12:00:32.860515  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 12:00:32.904658  384344 node_ready.go:49] node "no-preload-304121" has status "Ready":"True"
	I1002 12:00:32.904689  384344 node_ready.go:38] duration metric: took 44.230643ms waiting for node "no-preload-304121" to be "Ready" ...
	I1002 12:00:32.904705  384344 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:00:32.949887  384344 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:32.984050  384344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 12:00:32.997841  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 12:00:32.997869  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 12:00:32.999235  384344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 12:00:33.082015  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 12:00:33.082051  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 12:00:33.326524  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 12:00:33.326554  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 12:00:33.403533  384344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 12:00:34.844716  384344 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.984135314s)
	I1002 12:00:34.844752  384344 start.go:923] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1002 12:00:35.114639  384344 pod_ready.go:102] pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:35.538571  384344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.55447937s)
	I1002 12:00:35.538624  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.538641  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.538652  384344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.539381648s)
	I1002 12:00:35.538700  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.538713  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.539005  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539027  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.539039  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.539049  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.539137  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.539162  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539176  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.539194  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.539203  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.539299  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.539328  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539341  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.539537  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.539588  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539622  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.596015  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.596048  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.596384  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.596431  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.596449  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.641915  384344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.238327482s)
	I1002 12:00:35.641985  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.642007  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.642363  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.642389  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.642399  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.642409  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.642423  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.642716  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.642739  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.642750  384344 addons.go:467] Verifying addon metrics-server=true in "no-preload-304121"
	I1002 12:00:35.644696  384344 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1002 12:00:35.646046  384344 addons.go:502] enable addons completed in 3.05790546s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1002 12:00:36.113386  384344 pod_ready.go:92] pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.113415  384344 pod_ready.go:81] duration metric: took 3.163496821s waiting for pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.113429  384344 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.116264  384344 pod_ready.go:97] error getting pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-zcnv5" not found
	I1002 12:00:36.116290  384344 pod_ready.go:81] duration metric: took 2.85415ms waiting for pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace to be "Ready" ...
	E1002 12:00:36.116300  384344 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-zcnv5" not found
	I1002 12:00:36.116306  384344 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.126555  384344 pod_ready.go:92] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.126575  384344 pod_ready.go:81] duration metric: took 10.262082ms waiting for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.126583  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.137876  384344 pod_ready.go:92] pod "kube-apiserver-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.137903  384344 pod_ready.go:81] duration metric: took 11.312511ms waiting for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.137916  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.146526  384344 pod_ready.go:92] pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.146549  384344 pod_ready.go:81] duration metric: took 8.624341ms waiting for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.146561  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sprhm" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.307205  384344 pod_ready.go:92] pod "kube-proxy-sprhm" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.307231  384344 pod_ready.go:81] duration metric: took 160.663088ms waiting for pod "kube-proxy-sprhm" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.307241  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.707429  384344 pod_ready.go:92] pod "kube-scheduler-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.707455  384344 pod_ready.go:81] duration metric: took 400.207608ms waiting for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.707463  384344 pod_ready.go:38] duration metric: took 3.802745796s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:00:36.707480  384344 api_server.go:52] waiting for apiserver process to appear ...
	I1002 12:00:36.707537  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 12:00:36.733934  384344 api_server.go:72] duration metric: took 4.010431274s to wait for apiserver process to appear ...
	I1002 12:00:36.733962  384344 api_server.go:88] waiting for apiserver healthz status ...
	I1002 12:00:36.733979  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 12:00:36.740562  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 200:
	ok
	I1002 12:00:36.742234  384344 api_server.go:141] control plane version: v1.28.2
	I1002 12:00:36.742259  384344 api_server.go:131] duration metric: took 8.289515ms to wait for apiserver health ...
	I1002 12:00:36.742270  384344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 12:00:36.910934  384344 system_pods.go:59] 8 kube-system pods found
	I1002 12:00:36.910962  384344 system_pods.go:61] "coredns-5dd5756b68-st2bd" [6623fa3f-9a60-4364-bf08-7e84ae35d4b6] Running
	I1002 12:00:36.910967  384344 system_pods.go:61] "etcd-no-preload-304121" [f0a08dd5-ccdd-44a8-8d0a-ba5f617db7e0] Running
	I1002 12:00:36.910971  384344 system_pods.go:61] "kube-apiserver-no-preload-304121" [2e0d2991-fec5-44b4-8bb2-70206956c983] Running
	I1002 12:00:36.910976  384344 system_pods.go:61] "kube-controller-manager-no-preload-304121" [51031981-2958-4947-8d10-59a15a77ec1b] Running
	I1002 12:00:36.910980  384344 system_pods.go:61] "kube-proxy-sprhm" [d032413b-07c5-4478-bbdf-93383f85f73d] Running
	I1002 12:00:36.910983  384344 system_pods.go:61] "kube-scheduler-no-preload-304121" [f825ba3f-3bca-40ed-a5db-d3a3fc8b0751] Running
	I1002 12:00:36.910991  384344 system_pods.go:61] "metrics-server-57f55c9bc5-6c2hc" [020790e8-555b-4455-8e82-6ea49bb4212a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:36.911002  384344 system_pods.go:61] "storage-provisioner" [9c5b5a2d-e464-477e-9b5c-bf830ee9c640] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 12:00:36.911013  384344 system_pods.go:74] duration metric: took 168.734676ms to wait for pod list to return data ...
	I1002 12:00:36.911027  384344 default_sa.go:34] waiting for default service account to be created ...
	I1002 12:00:37.106994  384344 default_sa.go:45] found service account: "default"
	I1002 12:00:37.107038  384344 default_sa.go:55] duration metric: took 196.001935ms for default service account to be created ...
	I1002 12:00:37.107050  384344 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 12:00:37.310973  384344 system_pods.go:86] 8 kube-system pods found
	I1002 12:00:37.311012  384344 system_pods.go:89] "coredns-5dd5756b68-st2bd" [6623fa3f-9a60-4364-bf08-7e84ae35d4b6] Running
	I1002 12:00:37.311021  384344 system_pods.go:89] "etcd-no-preload-304121" [f0a08dd5-ccdd-44a8-8d0a-ba5f617db7e0] Running
	I1002 12:00:37.311028  384344 system_pods.go:89] "kube-apiserver-no-preload-304121" [2e0d2991-fec5-44b4-8bb2-70206956c983] Running
	I1002 12:00:37.311034  384344 system_pods.go:89] "kube-controller-manager-no-preload-304121" [51031981-2958-4947-8d10-59a15a77ec1b] Running
	I1002 12:00:37.311041  384344 system_pods.go:89] "kube-proxy-sprhm" [d032413b-07c5-4478-bbdf-93383f85f73d] Running
	I1002 12:00:37.311049  384344 system_pods.go:89] "kube-scheduler-no-preload-304121" [f825ba3f-3bca-40ed-a5db-d3a3fc8b0751] Running
	I1002 12:00:37.311060  384344 system_pods.go:89] "metrics-server-57f55c9bc5-6c2hc" [020790e8-555b-4455-8e82-6ea49bb4212a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:37.311075  384344 system_pods.go:89] "storage-provisioner" [9c5b5a2d-e464-477e-9b5c-bf830ee9c640] Running
	I1002 12:00:37.311093  384344 system_pods.go:126] duration metric: took 204.035391ms to wait for k8s-apps to be running ...
	I1002 12:00:37.311103  384344 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 12:00:37.311158  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:00:37.327711  384344 system_svc.go:56] duration metric: took 16.597865ms WaitForService to wait for kubelet.
	I1002 12:00:37.327736  384344 kubeadm.go:581] duration metric: took 4.604243467s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 12:00:37.327758  384344 node_conditions.go:102] verifying NodePressure condition ...
	I1002 12:00:37.506633  384344 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 12:00:37.506693  384344 node_conditions.go:123] node cpu capacity is 2
	I1002 12:00:37.506708  384344 node_conditions.go:105] duration metric: took 178.94359ms to run NodePressure ...
	I1002 12:00:37.506722  384344 start.go:228] waiting for startup goroutines ...
	I1002 12:00:37.506728  384344 start.go:233] waiting for cluster config update ...
	I1002 12:00:37.506738  384344 start.go:242] writing updated cluster config ...
	I1002 12:00:37.506999  384344 ssh_runner.go:195] Run: rm -f paused
	I1002 12:00:37.558171  384344 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 12:00:37.560280  384344 out.go:177] * Done! kubectl is now configured to use "no-preload-304121" cluster and "default" namespace by default
	I1002 12:00:34.160478  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:34.160520  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:34.160528  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:34.160540  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:34.160553  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:34.160577  384505 retry.go:31] will retry after 8.056057378s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:42.223209  384505 system_pods.go:86] 5 kube-system pods found
	I1002 12:00:42.223242  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:42.223251  384505 system_pods.go:89] "etcd-old-k8s-version-749860" [893d0bc0-cefb-48d6-82d8-6d6804184a67] Pending
	I1002 12:00:42.223257  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:42.223267  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:42.223276  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:42.223299  384505 retry.go:31] will retry after 9.279474557s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:51.510907  384505 system_pods.go:86] 6 kube-system pods found
	I1002 12:00:51.510937  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:51.510945  384505 system_pods.go:89] "etcd-old-k8s-version-749860" [893d0bc0-cefb-48d6-82d8-6d6804184a67] Running
	I1002 12:00:51.510949  384505 system_pods.go:89] "kube-apiserver-old-k8s-version-749860" [41854b6e-d738-4af3-9734-8133b2a299df] Pending
	I1002 12:00:51.510953  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:51.510959  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:51.510965  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:51.510995  384505 retry.go:31] will retry after 9.19295244s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:01:00.712167  384505 system_pods.go:86] 8 kube-system pods found
	I1002 12:01:00.712195  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:01:00.712201  384505 system_pods.go:89] "etcd-old-k8s-version-749860" [893d0bc0-cefb-48d6-82d8-6d6804184a67] Running
	I1002 12:01:00.712205  384505 system_pods.go:89] "kube-apiserver-old-k8s-version-749860" [41854b6e-d738-4af3-9734-8133b2a299df] Running
	I1002 12:01:00.712209  384505 system_pods.go:89] "kube-controller-manager-old-k8s-version-749860" [1531e118-f1f1-485e-b258-32e21b3385d8] Running
	I1002 12:01:00.712213  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:01:00.712217  384505 system_pods.go:89] "kube-scheduler-old-k8s-version-749860" [66983e5c-64ab-48ec-9c24-824f0a7cb36e] Running
	I1002 12:01:00.712223  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:01:00.712230  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:01:00.712237  384505 system_pods.go:126] duration metric: took 55.235875161s to wait for k8s-apps to be running ...
	I1002 12:01:00.712244  384505 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 12:01:00.712293  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:01:00.728970  384505 system_svc.go:56] duration metric: took 16.712185ms WaitForService to wait for kubelet.
	I1002 12:01:00.728999  384505 kubeadm.go:581] duration metric: took 1m5.625005524s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 12:01:00.729026  384505 node_conditions.go:102] verifying NodePressure condition ...
	I1002 12:01:00.733153  384505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 12:01:00.733180  384505 node_conditions.go:123] node cpu capacity is 2
	I1002 12:01:00.733196  384505 node_conditions.go:105] duration metric: took 4.162147ms to run NodePressure ...
	I1002 12:01:00.733209  384505 start.go:228] waiting for startup goroutines ...
	I1002 12:01:00.733216  384505 start.go:233] waiting for cluster config update ...
	I1002 12:01:00.733230  384505 start.go:242] writing updated cluster config ...
	I1002 12:01:00.733553  384505 ssh_runner.go:195] Run: rm -f paused
	I1002 12:01:00.784237  384505 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I1002 12:01:00.786178  384505 out.go:177] 
	W1002 12:01:00.787686  384505 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I1002 12:01:00.789104  384505 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1002 12:01:00.790521  384505 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-749860" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-02 11:54:11 UTC, ends at Mon 2023-10-02 12:08:45 UTC. --
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.286745919Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248525286729328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1437c4e5-5cec-44e0-b2fc-8b271141911e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.287330898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=755d843a-62e6-47b5-9fe5-275c85e899dc name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.287374476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=755d843a-62e6-47b5-9fe5-275c85e899dc name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.287630249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:05b91b88f25513c1be064704fc2960704b7aa627c2d664ee54b3d8417cc6667c,PodSandboxId:3bad86a275e9cd14afe9c6c4e389426e6d8e1e69557e615793a528a4e9782aef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247982512745551,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b21176-98f2-4fb6-98ea-1435def0edd9,},Annotations:map[string]string{io.kubernetes.container.hash: ff073123,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e6fa9cb90f98e3b1e49aab992bcd3c9b6b2fd3af9507ee27854642a2ded6b52,PodSandboxId:bb7a2ea6859d7f28f555b7ab7f9ff59da183e05912d99b254b10f92d553f85b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696247981927410404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qbmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a61868-45fc-40cd-8887-0609835639c1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f0caeef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6fc3c46243cd934fa6df76a09de19a56b92faba025437f84b6e7f76943c325,PodSandboxId:989fa6adc06d4b8a1c1ccd570a23022e215dd518a6e7dd680ec80afbe2d24237,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696247980189250265,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6g7f7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 37b0eff0-06cb-4b57-b679-970c738d0485,},Annotations:map[string]string{io.kubernetes.container.hash: 6b43ac23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef0f3434fa2a074d7bb8bb02382b0874dffc1b727ae6447a14450033f3d2c096,PodSandboxId:acd19a00c9de36be1d3e2cdbd5b9c5515d39171714985f05af656c203308c16c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696247958744683350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a1777afbb0c7e9fc3e84050349e0a2e8,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ded080ee3bc0b7d8cb6d9a097e51a2fd757f3febbacdf6caedfaf5def926899,PodSandboxId:35b65f1b622469307ded6fbfe569eee4e029aed5c8dfb2fadb4c2b231a1e934b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696247958477600985,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4f70d092091e94eb9a4455eabeed2b,},Annotations:
map[string]string{io.kubernetes.container.hash: 4eb25fd6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f90e4456cc7bb392b5155e5a0cc316a61313d95e28d42b006f9d702bcc2ab99,PodSandboxId:7727bf964c0c6c13bb7861839e221c1912d9e3265f0b7d21bff22b0a0ba64894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696247958527992057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205018018dd24b6c78ddf
e11802a8562,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c200729350d4f7fbece35ab983872da64699c57ae45d59a61d76470932d369,PodSandboxId:959099305ec1e5b280289c321bb98a4f8e8bdb120b411398d70e923716001feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696247958499957520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123e939d6b6526ca88a48a63ac0ec49
a,},Annotations:map[string]string{io.kubernetes.container.hash: d684db14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=755d843a-62e6-47b5-9fe5-275c85e899dc name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.325684892Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f3785eea-82f0-4dee-9ed2-e14a1178b465 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.325757831Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f3785eea-82f0-4dee-9ed2-e14a1178b465 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.327081879Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b35f6d1c-5e6d-4c84-b8e8-d0450cfad4dc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.327561165Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248525327544058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b35f6d1c-5e6d-4c84-b8e8-d0450cfad4dc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.328963766Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9e140a76-0bd0-4908-a7bb-e2083cfe4016 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.329010529Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9e140a76-0bd0-4908-a7bb-e2083cfe4016 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.329157255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:05b91b88f25513c1be064704fc2960704b7aa627c2d664ee54b3d8417cc6667c,PodSandboxId:3bad86a275e9cd14afe9c6c4e389426e6d8e1e69557e615793a528a4e9782aef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247982512745551,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b21176-98f2-4fb6-98ea-1435def0edd9,},Annotations:map[string]string{io.kubernetes.container.hash: ff073123,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e6fa9cb90f98e3b1e49aab992bcd3c9b6b2fd3af9507ee27854642a2ded6b52,PodSandboxId:bb7a2ea6859d7f28f555b7ab7f9ff59da183e05912d99b254b10f92d553f85b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696247981927410404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qbmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a61868-45fc-40cd-8887-0609835639c1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f0caeef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6fc3c46243cd934fa6df76a09de19a56b92faba025437f84b6e7f76943c325,PodSandboxId:989fa6adc06d4b8a1c1ccd570a23022e215dd518a6e7dd680ec80afbe2d24237,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696247980189250265,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6g7f7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 37b0eff0-06cb-4b57-b679-970c738d0485,},Annotations:map[string]string{io.kubernetes.container.hash: 6b43ac23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef0f3434fa2a074d7bb8bb02382b0874dffc1b727ae6447a14450033f3d2c096,PodSandboxId:acd19a00c9de36be1d3e2cdbd5b9c5515d39171714985f05af656c203308c16c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696247958744683350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a1777afbb0c7e9fc3e84050349e0a2e8,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ded080ee3bc0b7d8cb6d9a097e51a2fd757f3febbacdf6caedfaf5def926899,PodSandboxId:35b65f1b622469307ded6fbfe569eee4e029aed5c8dfb2fadb4c2b231a1e934b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696247958477600985,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4f70d092091e94eb9a4455eabeed2b,},Annotations:
map[string]string{io.kubernetes.container.hash: 4eb25fd6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f90e4456cc7bb392b5155e5a0cc316a61313d95e28d42b006f9d702bcc2ab99,PodSandboxId:7727bf964c0c6c13bb7861839e221c1912d9e3265f0b7d21bff22b0a0ba64894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696247958527992057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205018018dd24b6c78ddf
e11802a8562,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c200729350d4f7fbece35ab983872da64699c57ae45d59a61d76470932d369,PodSandboxId:959099305ec1e5b280289c321bb98a4f8e8bdb120b411398d70e923716001feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696247958499957520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123e939d6b6526ca88a48a63ac0ec49
a,},Annotations:map[string]string{io.kubernetes.container.hash: d684db14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9e140a76-0bd0-4908-a7bb-e2083cfe4016 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.369506837Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c084dfaa-239a-4335-95a9-189209e215f3 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.369563510Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c084dfaa-239a-4335-95a9-189209e215f3 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.370561957Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f0fb4837-1b77-43cc-9d9f-292899438bd8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.370998784Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248525370978610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f0fb4837-1b77-43cc-9d9f-292899438bd8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.371914801Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e349fb84-5e32-41e4-a0a5-a335a2fbe5fb name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.371977669Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e349fb84-5e32-41e4-a0a5-a335a2fbe5fb name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.372199393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:05b91b88f25513c1be064704fc2960704b7aa627c2d664ee54b3d8417cc6667c,PodSandboxId:3bad86a275e9cd14afe9c6c4e389426e6d8e1e69557e615793a528a4e9782aef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247982512745551,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b21176-98f2-4fb6-98ea-1435def0edd9,},Annotations:map[string]string{io.kubernetes.container.hash: ff073123,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e6fa9cb90f98e3b1e49aab992bcd3c9b6b2fd3af9507ee27854642a2ded6b52,PodSandboxId:bb7a2ea6859d7f28f555b7ab7f9ff59da183e05912d99b254b10f92d553f85b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696247981927410404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qbmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a61868-45fc-40cd-8887-0609835639c1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f0caeef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6fc3c46243cd934fa6df76a09de19a56b92faba025437f84b6e7f76943c325,PodSandboxId:989fa6adc06d4b8a1c1ccd570a23022e215dd518a6e7dd680ec80afbe2d24237,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696247980189250265,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6g7f7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 37b0eff0-06cb-4b57-b679-970c738d0485,},Annotations:map[string]string{io.kubernetes.container.hash: 6b43ac23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef0f3434fa2a074d7bb8bb02382b0874dffc1b727ae6447a14450033f3d2c096,PodSandboxId:acd19a00c9de36be1d3e2cdbd5b9c5515d39171714985f05af656c203308c16c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696247958744683350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a1777afbb0c7e9fc3e84050349e0a2e8,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ded080ee3bc0b7d8cb6d9a097e51a2fd757f3febbacdf6caedfaf5def926899,PodSandboxId:35b65f1b622469307ded6fbfe569eee4e029aed5c8dfb2fadb4c2b231a1e934b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696247958477600985,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4f70d092091e94eb9a4455eabeed2b,},Annotations:
map[string]string{io.kubernetes.container.hash: 4eb25fd6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f90e4456cc7bb392b5155e5a0cc316a61313d95e28d42b006f9d702bcc2ab99,PodSandboxId:7727bf964c0c6c13bb7861839e221c1912d9e3265f0b7d21bff22b0a0ba64894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696247958527992057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205018018dd24b6c78ddf
e11802a8562,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c200729350d4f7fbece35ab983872da64699c57ae45d59a61d76470932d369,PodSandboxId:959099305ec1e5b280289c321bb98a4f8e8bdb120b411398d70e923716001feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696247958499957520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123e939d6b6526ca88a48a63ac0ec49
a,},Annotations:map[string]string{io.kubernetes.container.hash: d684db14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e349fb84-5e32-41e4-a0a5-a335a2fbe5fb name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.406967215Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=793cd53f-d3a2-42fb-af8d-048c192dcfca name=/runtime.v1.RuntimeService/Version
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.407023574Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=793cd53f-d3a2-42fb-af8d-048c192dcfca name=/runtime.v1.RuntimeService/Version
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.408403592Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0d01dffb-9d8d-417b-bec5-d4f38700fac2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.408909930Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248525408893174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0d01dffb-9d8d-417b-bec5-d4f38700fac2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.409575434Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4fbdddca-67d9-4f1d-b067-e7bc11fa93fe name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.409621824Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4fbdddca-67d9-4f1d-b067-e7bc11fa93fe name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:08:45 embed-certs-487027 crio[717]: time="2023-10-02 12:08:45.409771820Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:05b91b88f25513c1be064704fc2960704b7aa627c2d664ee54b3d8417cc6667c,PodSandboxId:3bad86a275e9cd14afe9c6c4e389426e6d8e1e69557e615793a528a4e9782aef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247982512745551,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b21176-98f2-4fb6-98ea-1435def0edd9,},Annotations:map[string]string{io.kubernetes.container.hash: ff073123,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e6fa9cb90f98e3b1e49aab992bcd3c9b6b2fd3af9507ee27854642a2ded6b52,PodSandboxId:bb7a2ea6859d7f28f555b7ab7f9ff59da183e05912d99b254b10f92d553f85b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696247981927410404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qbmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a61868-45fc-40cd-8887-0609835639c1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f0caeef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6fc3c46243cd934fa6df76a09de19a56b92faba025437f84b6e7f76943c325,PodSandboxId:989fa6adc06d4b8a1c1ccd570a23022e215dd518a6e7dd680ec80afbe2d24237,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696247980189250265,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6g7f7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 37b0eff0-06cb-4b57-b679-970c738d0485,},Annotations:map[string]string{io.kubernetes.container.hash: 6b43ac23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef0f3434fa2a074d7bb8bb02382b0874dffc1b727ae6447a14450033f3d2c096,PodSandboxId:acd19a00c9de36be1d3e2cdbd5b9c5515d39171714985f05af656c203308c16c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696247958744683350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a1777afbb0c7e9fc3e84050349e0a2e8,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ded080ee3bc0b7d8cb6d9a097e51a2fd757f3febbacdf6caedfaf5def926899,PodSandboxId:35b65f1b622469307ded6fbfe569eee4e029aed5c8dfb2fadb4c2b231a1e934b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696247958477600985,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4f70d092091e94eb9a4455eabeed2b,},Annotations:
map[string]string{io.kubernetes.container.hash: 4eb25fd6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f90e4456cc7bb392b5155e5a0cc316a61313d95e28d42b006f9d702bcc2ab99,PodSandboxId:7727bf964c0c6c13bb7861839e221c1912d9e3265f0b7d21bff22b0a0ba64894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696247958527992057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205018018dd24b6c78ddf
e11802a8562,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c200729350d4f7fbece35ab983872da64699c57ae45d59a61d76470932d369,PodSandboxId:959099305ec1e5b280289c321bb98a4f8e8bdb120b411398d70e923716001feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696247958499957520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123e939d6b6526ca88a48a63ac0ec49
a,},Annotations:map[string]string{io.kubernetes.container.hash: d684db14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4fbdddca-67d9-4f1d-b067-e7bc11fa93fe name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	05b91b88f2551       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   3bad86a275e9c       storage-provisioner
	9e6fa9cb90f98       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   bb7a2ea6859d7       coredns-5dd5756b68-qbmwd
	3b6fc3c46243c       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   9 minutes ago       Running             kube-proxy                0                   989fa6adc06d4       kube-proxy-6g7f7
	ef0f3434fa2a0       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   9 minutes ago       Running             kube-scheduler            2                   acd19a00c9de3       kube-scheduler-embed-certs-487027
	0f90e4456cc7b       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   9 minutes ago       Running             kube-controller-manager   2                   7727bf964c0c6       kube-controller-manager-embed-certs-487027
	07c200729350d       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   9 minutes ago       Running             kube-apiserver            2                   959099305ec1e       kube-apiserver-embed-certs-487027
	0ded080ee3bc0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   35b65f1b62246       etcd-embed-certs-487027
	
	* 
	* ==> coredns [9e6fa9cb90f98e3b1e49aab992bcd3c9b6b2fd3af9507ee27854642a2ded6b52] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49947 - 38866 "HINFO IN 5084756394073907370.16678952905601175. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.014470802s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-487027
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-487027
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=embed-certs-487027
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T11_59_26_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 11:59:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-487027
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 12:08:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 12:04:54 +0000   Mon, 02 Oct 2023 11:59:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 12:04:54 +0000   Mon, 02 Oct 2023 11:59:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 12:04:54 +0000   Mon, 02 Oct 2023 11:59:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 12:04:54 +0000   Mon, 02 Oct 2023 11:59:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.147
	  Hostname:    embed-certs-487027
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba4633ef464748b085c4a648df6d3a93
	  System UUID:                ba4633ef-4647-48b0-85c4-a648df6d3a93
	  Boot ID:                    b0f85ef0-dda5-4a13-9c3e-f60b885e2968
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-qbmwd                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-487027                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-embed-certs-487027             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-embed-certs-487027    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-6g7f7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-487027             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-57f55c9bc5-hbb5d               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  Starting                 9m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m28s (x8 over 9m28s)  kubelet          Node embed-certs-487027 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m28s (x8 over 9m28s)  kubelet          Node embed-certs-487027 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m28s (x7 over 9m28s)  kubelet          Node embed-certs-487027 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m19s                  kubelet          Node embed-certs-487027 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s                  kubelet          Node embed-certs-487027 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s                  kubelet          Node embed-certs-487027 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m19s                  kubelet          Node embed-certs-487027 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m18s                  kubelet          Node embed-certs-487027 status is now: NodeReady
	  Normal  RegisteredNode           9m8s                   node-controller  Node embed-certs-487027 event: Registered Node embed-certs-487027 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct 2 11:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.077104] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.573132] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.434180] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.138191] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.387286] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.116105] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.101054] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.144939] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.111741] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.260744] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +18.033528] systemd-fstab-generator[916]: Ignoring "noauto" for root device
	[ +21.044818] kauditd_printk_skb: 29 callbacks suppressed
	[Oct 2 11:59] systemd-fstab-generator[3490]: Ignoring "noauto" for root device
	[  +9.807154] systemd-fstab-generator[3813]: Ignoring "noauto" for root device
	[ +14.491002] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [0ded080ee3bc0b7d8cb6d9a097e51a2fd757f3febbacdf6caedfaf5def926899] <==
	* {"level":"info","ts":"2023-10-02T11:59:20.592311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56ba37728bcb2347 switched to configuration voters=(6249368398041129799)"}
	{"level":"info","ts":"2023-10-02T11:59:20.594872Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"63cf2dc9c47dd9a","local-member-id":"56ba37728bcb2347","added-peer-id":"56ba37728bcb2347","added-peer-peer-urls":["https://192.168.72.147:2380"]}
	{"level":"info","ts":"2023-10-02T11:59:20.601514Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-02T11:59:20.601715Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"56ba37728bcb2347","initial-advertise-peer-urls":["https://192.168.72.147:2380"],"listen-peer-urls":["https://192.168.72.147:2380"],"advertise-client-urls":["https://192.168.72.147:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.147:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-02T11:59:20.601749Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-02T11:59:20.601891Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.147:2380"}
	{"level":"info","ts":"2023-10-02T11:59:20.601902Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.147:2380"}
	{"level":"info","ts":"2023-10-02T11:59:20.671511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56ba37728bcb2347 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-02T11:59:20.671715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56ba37728bcb2347 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-02T11:59:20.671781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56ba37728bcb2347 received MsgPreVoteResp from 56ba37728bcb2347 at term 1"}
	{"level":"info","ts":"2023-10-02T11:59:20.67183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56ba37728bcb2347 became candidate at term 2"}
	{"level":"info","ts":"2023-10-02T11:59:20.671863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56ba37728bcb2347 received MsgVoteResp from 56ba37728bcb2347 at term 2"}
	{"level":"info","ts":"2023-10-02T11:59:20.671899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56ba37728bcb2347 became leader at term 2"}
	{"level":"info","ts":"2023-10-02T11:59:20.671929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 56ba37728bcb2347 elected leader 56ba37728bcb2347 at term 2"}
	{"level":"info","ts":"2023-10-02T11:59:20.676857Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"56ba37728bcb2347","local-member-attributes":"{Name:embed-certs-487027 ClientURLs:[https://192.168.72.147:2379]}","request-path":"/0/members/56ba37728bcb2347/attributes","cluster-id":"63cf2dc9c47dd9a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T11:59:20.678183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T11:59:20.679297Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.147:2379"}
	{"level":"info","ts":"2023-10-02T11:59:20.679376Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:59:20.679576Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T11:59:20.685395Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T11:59:20.687845Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"63cf2dc9c47dd9a","local-member-id":"56ba37728bcb2347","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:59:20.687976Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:59:20.688014Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:59:20.688227Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T11:59:20.688247Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  12:08:45 up 14 min,  0 users,  load average: 0.07, 0.22, 0.20
	Linux embed-certs-487027 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [07c200729350d4f7fbece35ab983872da64699c57ae45d59a61d76470932d369] <==
	* W1002 12:04:23.842562       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:04:23.842674       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1002 12:04:23.842733       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 12:04:23.842680       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:04:23.842905       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:04:23.844210       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:05:22.687984       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1002 12:05:23.842890       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:05:23.842956       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1002 12:05:23.842968       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 12:05:23.844906       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:05:23.845006       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:05:23.845034       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:06:22.688271       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1002 12:07:22.687220       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1002 12:07:23.843928       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:07:23.843999       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1002 12:07:23.844018       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 12:07:23.845387       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:07:23.845587       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:07:23.845625       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:08:22.689855       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [0f90e4456cc7bb392b5155e5a0cc316a61313d95e28d42b006f9d702bcc2ab99] <==
	* I1002 12:03:08.303891       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:03:37.842162       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:03:38.315257       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:04:07.848395       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:04:08.325660       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:04:37.855377       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:04:38.334349       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:05:07.880156       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:05:08.344968       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:05:37.891604       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:05:37.959852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="1.634261ms"
	I1002 12:05:38.355001       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 12:05:49.942546       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="252.025µs"
	E1002 12:06:07.898636       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:06:08.365627       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:06:37.905258       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:06:38.374598       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:07:07.910779       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:07:08.384823       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:07:37.917245       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:07:38.393869       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:08:07.923572       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:08:08.406912       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:08:37.931386       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:08:38.416003       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [3b6fc3c46243cd934fa6df76a09de19a56b92faba025437f84b6e7f76943c325] <==
	* I1002 11:59:40.672519       1 server_others.go:69] "Using iptables proxy"
	I1002 11:59:40.735143       1 node.go:141] Successfully retrieved node IP: 192.168.72.147
	I1002 11:59:41.400126       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1002 11:59:41.400220       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 11:59:41.650617       1 server_others.go:152] "Using iptables Proxier"
	I1002 11:59:41.727575       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 11:59:41.837904       1 server.go:846] "Version info" version="v1.28.2"
	I1002 11:59:41.838608       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 11:59:41.839397       1 config.go:315] "Starting node config controller"
	I1002 11:59:41.839489       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 11:59:41.844430       1 config.go:188] "Starting service config controller"
	I1002 11:59:41.844692       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 11:59:41.844932       1 config.go:97] "Starting endpoint slice config controller"
	I1002 11:59:41.844960       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 11:59:41.940404       1 shared_informer.go:318] Caches are synced for node config
	I1002 11:59:41.946866       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 11:59:41.947369       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [ef0f3434fa2a074d7bb8bb02382b0874dffc1b727ae6447a14450033f3d2c096] <==
	* W1002 11:59:23.737260       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 11:59:23.737413       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 11:59:23.753052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1002 11:59:23.753218       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1002 11:59:23.780048       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1002 11:59:23.780137       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1002 11:59:23.809269       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 11:59:23.809364       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1002 11:59:23.864158       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 11:59:23.864274       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1002 11:59:23.894101       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 11:59:23.894167       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1002 11:59:23.904849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 11:59:23.904905       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1002 11:59:24.037416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 11:59:24.037644       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1002 11:59:24.061581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 11:59:24.061725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1002 11:59:24.120600       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 11:59:24.120717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1002 11:59:24.150159       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 11:59:24.150397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1002 11:59:24.231051       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 11:59:24.231286       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1002 11:59:25.862728       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 11:54:11 UTC, ends at Mon 2023-10-02 12:08:45 UTC. --
	Oct 02 12:06:02 embed-certs-487027 kubelet[3820]: E1002 12:06:02.923672    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:06:16 embed-certs-487027 kubelet[3820]: E1002 12:06:16.925362    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:06:27 embed-certs-487027 kubelet[3820]: E1002 12:06:27.026057    3820 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:06:27 embed-certs-487027 kubelet[3820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:06:27 embed-certs-487027 kubelet[3820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:06:27 embed-certs-487027 kubelet[3820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:06:27 embed-certs-487027 kubelet[3820]: E1002 12:06:27.924056    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:06:41 embed-certs-487027 kubelet[3820]: E1002 12:06:41.923420    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:06:55 embed-certs-487027 kubelet[3820]: E1002 12:06:55.922285    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:07:10 embed-certs-487027 kubelet[3820]: E1002 12:07:10.923395    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:07:24 embed-certs-487027 kubelet[3820]: E1002 12:07:24.923912    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:07:27 embed-certs-487027 kubelet[3820]: E1002 12:07:27.022140    3820 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:07:27 embed-certs-487027 kubelet[3820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:07:27 embed-certs-487027 kubelet[3820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:07:27 embed-certs-487027 kubelet[3820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:07:35 embed-certs-487027 kubelet[3820]: E1002 12:07:35.924297    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:07:47 embed-certs-487027 kubelet[3820]: E1002 12:07:47.923009    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:08:02 embed-certs-487027 kubelet[3820]: E1002 12:08:02.924600    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:08:16 embed-certs-487027 kubelet[3820]: E1002 12:08:16.924552    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:08:27 embed-certs-487027 kubelet[3820]: E1002 12:08:27.023669    3820 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:08:27 embed-certs-487027 kubelet[3820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:08:27 embed-certs-487027 kubelet[3820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:08:27 embed-certs-487027 kubelet[3820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:08:30 embed-certs-487027 kubelet[3820]: E1002 12:08:30.923774    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:08:44 embed-certs-487027 kubelet[3820]: E1002 12:08:44.923167    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	
	* 
	* ==> storage-provisioner [05b91b88f25513c1be064704fc2960704b7aa627c2d664ee54b3d8417cc6667c] <==
	* I1002 11:59:42.623292       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 11:59:42.648995       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 11:59:42.649117       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 11:59:42.665278       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 11:59:42.665534       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-487027_a2b9f4b7-959a-4c18-a755-59a062c0fc46!
	I1002 11:59:42.668384       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8d9281a0-87b8-4f66-90c1-c2b68898007d", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-487027_a2b9f4b7-959a-4c18-a755-59a062c0fc46 became leader
	I1002 11:59:42.767555       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-487027_a2b9f4b7-959a-4c18-a755-59a062c0fc46!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-487027 -n embed-certs-487027
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-487027 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-hbb5d
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-487027 describe pod metrics-server-57f55c9bc5-hbb5d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-487027 describe pod metrics-server-57f55c9bc5-hbb5d: exit status 1 (70.434871ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-hbb5d" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-487027 describe pod metrics-server-57f55c9bc5-hbb5d: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1002 12:00:54.840532  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-304121 -n no-preload-304121
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-10-02 12:09:38.127008206 +0000 UTC m=+5628.698694255
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-304121 -n no-preload-304121
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-304121 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-304121 logs -n 25: (1.724837952s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-124285 sudo cat                              | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo find                             | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo crio                             | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-124285                                       | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-448198 | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | disable-driver-mounts-448198                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:47 UTC |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-304121             | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC | 02 Oct 23 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-749860        | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC | 02 Oct 23 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-487027            | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC | 02 Oct 23 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-487027                                  | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-777999  | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC | 02 Oct 23 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC |                     |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-304121                  | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 12:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-749860             | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-487027                 | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-487027                                  | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-777999       | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:59 UTC |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 11:50:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 11:50:14.045882  384965 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:50:14.045995  384965 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:50:14.046005  384965 out.go:309] Setting ErrFile to fd 2...
	I1002 11:50:14.046009  384965 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:50:14.046207  384965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 11:50:14.046807  384965 out.go:303] Setting JSON to false
	I1002 11:50:14.047867  384965 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9160,"bootTime":1696238254,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 11:50:14.047937  384965 start.go:138] virtualization: kvm guest
	I1002 11:50:14.050148  384965 out.go:177] * [default-k8s-diff-port-777999] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 11:50:14.051736  384965 notify.go:220] Checking for updates...
	I1002 11:50:14.051738  384965 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 11:50:14.053419  384965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:50:14.055001  384965 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:50:14.056531  384965 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:50:14.057828  384965 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 11:50:14.059154  384965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 11:50:14.060884  384965 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:50:14.061318  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:50:14.061365  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:50:14.077285  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36537
	I1002 11:50:14.077670  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:50:14.078164  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:50:14.078184  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:50:14.078590  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:50:14.078766  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:50:14.079011  384965 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 11:50:14.079285  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:50:14.079321  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:50:14.093519  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38549
	I1002 11:50:14.093897  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:50:14.094331  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:50:14.094375  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:50:14.094689  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:50:14.094875  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:50:14.127852  384965 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 11:50:14.129579  384965 start.go:298] selected driver: kvm2
	I1002 11:50:14.129589  384965 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-777999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:50:14.129734  384965 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 11:50:14.130441  384965 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:50:14.130517  384965 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 11:50:14.145313  384965 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 11:50:14.145678  384965 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 11:50:14.145737  384965 cni.go:84] Creating CNI manager for ""
	I1002 11:50:14.145747  384965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:50:14.145754  384965 start_flags.go:321] config:
	{Name:default-k8s-diff-port-777999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:50:14.145885  384965 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:50:14.147697  384965 out.go:177] * Starting control plane node default-k8s-diff-port-777999 in cluster default-k8s-diff-port-777999
	I1002 11:50:14.518571  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:14.149188  384965 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:50:14.149229  384965 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1002 11:50:14.149243  384965 cache.go:57] Caching tarball of preloaded images
	I1002 11:50:14.149342  384965 preload.go:174] Found /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 11:50:14.149355  384965 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 11:50:14.149469  384965 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/config.json ...
	I1002 11:50:14.149690  384965 start.go:365] acquiring machines lock for default-k8s-diff-port-777999: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:50:17.590603  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:23.670608  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:26.742637  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:32.822640  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:35.894704  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:41.974682  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:45.046703  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:51.126633  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:54.198624  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:00.278622  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:03.350650  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:09.430627  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:12.502639  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:18.582668  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:21.654622  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:27.734588  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:30.806674  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:36.886711  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:39.958677  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:46.038638  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:49.110583  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:55.190669  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:58.262632  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:04.342658  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:07.414733  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:13.494648  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:16.566610  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:22.646664  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:25.718682  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:31.798673  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:34.870620  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:40.950664  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:44.022695  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:50.102629  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:53.174698  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:59.254603  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:02.326684  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:08.406661  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:11.478769  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:17.558670  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:20.630696  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:26.710600  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:29.782676  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:35.862655  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:38.867149  384505 start.go:369] acquired machines lock for "old-k8s-version-749860" in 4m24.621828644s
	I1002 11:53:38.867251  384505 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:53:38.867260  384505 fix.go:54] fixHost starting: 
	I1002 11:53:38.867725  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:53:38.867761  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:53:38.882900  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45593
	I1002 11:53:38.883484  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:53:38.883950  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:53:38.883974  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:53:38.884318  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:53:38.884530  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:38.884688  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:53:38.886067  384505 fix.go:102] recreateIfNeeded on old-k8s-version-749860: state=Stopped err=<nil>
	I1002 11:53:38.886102  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	W1002 11:53:38.886288  384505 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:53:38.888401  384505 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-749860" ...
	I1002 11:53:38.889752  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Start
	I1002 11:53:38.889924  384505 main.go:141] libmachine: (old-k8s-version-749860) Ensuring networks are active...
	I1002 11:53:38.890638  384505 main.go:141] libmachine: (old-k8s-version-749860) Ensuring network default is active
	I1002 11:53:38.890980  384505 main.go:141] libmachine: (old-k8s-version-749860) Ensuring network mk-old-k8s-version-749860 is active
	I1002 11:53:38.891314  384505 main.go:141] libmachine: (old-k8s-version-749860) Getting domain xml...
	I1002 11:53:38.892257  384505 main.go:141] libmachine: (old-k8s-version-749860) Creating domain...
	I1002 11:53:38.864675  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:53:38.864716  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:53:38.866979  384344 machine.go:91] provisioned docker machine in 4m37.398507067s
	I1002 11:53:38.867033  384344 fix.go:56] fixHost completed within 4m37.419547722s
	I1002 11:53:38.867039  384344 start.go:83] releasing machines lock for "no-preload-304121", held for 4m37.419568347s
	W1002 11:53:38.867080  384344 start.go:688] error starting host: provision: host is not running
	W1002 11:53:38.867230  384344 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1002 11:53:38.867240  384344 start.go:703] Will try again in 5 seconds ...
	I1002 11:53:40.120018  384505 main.go:141] libmachine: (old-k8s-version-749860) Waiting to get IP...
	I1002 11:53:40.120927  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:40.121258  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:40.121366  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:40.121241  385500 retry.go:31] will retry after 204.223254ms: waiting for machine to come up
	I1002 11:53:40.326895  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:40.327332  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:40.327351  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:40.327293  385500 retry.go:31] will retry after 300.58131ms: waiting for machine to come up
	I1002 11:53:40.629931  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:40.630293  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:40.630324  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:40.630247  385500 retry.go:31] will retry after 460.804681ms: waiting for machine to come up
	I1002 11:53:41.092440  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:41.092887  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:41.092914  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:41.092838  385500 retry.go:31] will retry after 573.592817ms: waiting for machine to come up
	I1002 11:53:41.668507  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:41.668916  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:41.668955  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:41.668879  385500 retry.go:31] will retry after 647.261387ms: waiting for machine to come up
	I1002 11:53:42.317738  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:42.318193  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:42.318228  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:42.318135  385500 retry.go:31] will retry after 643.115699ms: waiting for machine to come up
	I1002 11:53:42.963169  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:42.963572  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:42.963595  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:42.963517  385500 retry.go:31] will retry after 1.059074571s: waiting for machine to come up
	I1002 11:53:44.024372  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:44.024750  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:44.024785  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:44.024703  385500 retry.go:31] will retry after 1.142402067s: waiting for machine to come up
	I1002 11:53:43.868857  384344 start.go:365] acquiring machines lock for no-preload-304121: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:53:45.169146  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:45.169470  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:45.169509  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:45.169430  385500 retry.go:31] will retry after 1.244757741s: waiting for machine to come up
	I1002 11:53:46.415640  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:46.416049  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:46.416078  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:46.416030  385500 retry.go:31] will retry after 2.066150597s: waiting for machine to come up
	I1002 11:53:48.483477  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:48.483998  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:48.484023  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:48.483921  385500 retry.go:31] will retry after 2.521584671s: waiting for machine to come up
	I1002 11:53:51.008090  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:51.008535  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:51.008565  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:51.008455  385500 retry.go:31] will retry after 2.896131667s: waiting for machine to come up
	I1002 11:53:53.905835  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:53.906274  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:53.906309  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:53.906207  385500 retry.go:31] will retry after 3.463250216s: waiting for machine to come up
	I1002 11:53:58.755219  384787 start.go:369] acquired machines lock for "embed-certs-487027" in 4m10.971064405s
	I1002 11:53:58.755286  384787 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:53:58.755301  384787 fix.go:54] fixHost starting: 
	I1002 11:53:58.755691  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:53:58.755733  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:53:58.772186  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38267
	I1002 11:53:58.772591  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:53:58.773071  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:53:58.773101  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:53:58.773409  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:53:58.773585  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:53:58.773710  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:53:58.775231  384787 fix.go:102] recreateIfNeeded on embed-certs-487027: state=Stopped err=<nil>
	I1002 11:53:58.775273  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	W1002 11:53:58.775449  384787 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:53:58.778132  384787 out.go:177] * Restarting existing kvm2 VM for "embed-certs-487027" ...
	I1002 11:53:57.373844  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.374176  384505 main.go:141] libmachine: (old-k8s-version-749860) Found IP for machine: 192.168.83.82
	I1002 11:53:57.374195  384505 main.go:141] libmachine: (old-k8s-version-749860) Reserving static IP address...
	I1002 11:53:57.374208  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has current primary IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.374680  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "old-k8s-version-749860", mac: "52:54:00:d4:c3:b0", ip: "192.168.83.82"} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.374711  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | skip adding static IP to network mk-old-k8s-version-749860 - found existing host DHCP lease matching {name: "old-k8s-version-749860", mac: "52:54:00:d4:c3:b0", ip: "192.168.83.82"}
	I1002 11:53:57.374725  384505 main.go:141] libmachine: (old-k8s-version-749860) Reserved static IP address: 192.168.83.82
	I1002 11:53:57.374741  384505 main.go:141] libmachine: (old-k8s-version-749860) Waiting for SSH to be available...
	I1002 11:53:57.374758  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Getting to WaitForSSH function...
	I1002 11:53:57.377368  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.377757  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.377791  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.377890  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Using SSH client type: external
	I1002 11:53:57.377933  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa (-rw-------)
	I1002 11:53:57.377976  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:53:57.377995  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | About to run SSH command:
	I1002 11:53:57.378008  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | exit 0
	I1002 11:53:57.474496  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | SSH cmd err, output: <nil>: 
	I1002 11:53:57.474881  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetConfigRaw
	I1002 11:53:57.475581  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:53:57.478078  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.478423  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.478464  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.478679  384505 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/config.json ...
	I1002 11:53:57.478876  384505 machine.go:88] provisioning docker machine ...
	I1002 11:53:57.478895  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:57.479118  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetMachineName
	I1002 11:53:57.479286  384505 buildroot.go:166] provisioning hostname "old-k8s-version-749860"
	I1002 11:53:57.479300  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetMachineName
	I1002 11:53:57.479509  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:57.481462  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.481768  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.481805  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.481935  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:57.482138  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.482280  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.482438  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:57.482611  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:57.483038  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:57.483051  384505 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-749860 && echo "old-k8s-version-749860" | sudo tee /etc/hostname
	I1002 11:53:57.622724  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-749860
	
	I1002 11:53:57.622760  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:57.626222  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.626663  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.626707  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.626840  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:57.627102  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.627297  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.627513  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:57.627678  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:57.628068  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:57.628089  384505 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-749860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-749860/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-749860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:53:57.767587  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:53:57.767664  384505 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:53:57.767708  384505 buildroot.go:174] setting up certificates
	I1002 11:53:57.767721  384505 provision.go:83] configureAuth start
	I1002 11:53:57.767734  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetMachineName
	I1002 11:53:57.768045  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:53:57.771158  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.771591  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.771620  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.771825  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:57.774031  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.774444  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.774523  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.774529  384505 provision.go:138] copyHostCerts
	I1002 11:53:57.774608  384505 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:53:57.774623  384505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:53:57.774695  384505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:53:57.774787  384505 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:53:57.774797  384505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:53:57.774821  384505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:53:57.774884  384505 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:53:57.774891  384505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:53:57.774912  384505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:53:57.774970  384505 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-749860 san=[192.168.83.82 192.168.83.82 localhost 127.0.0.1 minikube old-k8s-version-749860]
	I1002 11:53:58.003098  384505 provision.go:172] copyRemoteCerts
	I1002 11:53:58.003163  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:53:58.003190  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.005944  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.006310  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.006345  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.006482  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.006734  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.006887  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.007049  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.099927  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:53:58.123424  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 11:53:58.147578  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 11:53:58.171190  384505 provision.go:86] duration metric: configureAuth took 403.448571ms
	I1002 11:53:58.171228  384505 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:53:58.171440  384505 config.go:182] Loaded profile config "old-k8s-version-749860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1002 11:53:58.171575  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.174314  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.174684  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.174723  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.174860  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.175078  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.175274  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.175409  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.175596  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:58.175908  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:58.175923  384505 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:53:58.491028  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:53:58.491062  384505 machine.go:91] provisioned docker machine in 1.012168334s
	I1002 11:53:58.491072  384505 start.go:300] post-start starting for "old-k8s-version-749860" (driver="kvm2")
	I1002 11:53:58.491085  384505 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:53:58.491106  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.491521  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:53:58.491558  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.494009  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.494382  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.494415  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.494546  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.494753  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.494903  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.495037  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.588465  384505 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:53:58.592844  384505 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:53:58.592872  384505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:53:58.592940  384505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:53:58.593047  384505 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:53:58.593171  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:53:58.601583  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:53:58.624453  384505 start.go:303] post-start completed in 133.365398ms
	I1002 11:53:58.624486  384505 fix.go:56] fixHost completed within 19.757224844s
	I1002 11:53:58.624511  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.627104  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.627476  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.627534  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.627695  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.627913  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.628105  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.628253  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.628426  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:58.628749  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:58.628762  384505 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:53:58.755032  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247638.703145377
	
	I1002 11:53:58.755056  384505 fix.go:206] guest clock: 1696247638.703145377
	I1002 11:53:58.755066  384505 fix.go:219] Guest: 2023-10-02 11:53:58.703145377 +0000 UTC Remote: 2023-10-02 11:53:58.624490602 +0000 UTC m=+284.515069275 (delta=78.654775ms)
	I1002 11:53:58.755092  384505 fix.go:190] guest clock delta is within tolerance: 78.654775ms
	I1002 11:53:58.755098  384505 start.go:83] releasing machines lock for "old-k8s-version-749860", held for 19.887910329s
	I1002 11:53:58.755126  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.755438  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:53:58.758172  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.758431  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.758467  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.758673  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.759288  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.759466  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.759560  384505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:53:58.759620  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.759717  384505 ssh_runner.go:195] Run: cat /version.json
	I1002 11:53:58.759748  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.762471  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.762618  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.762847  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.762879  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.762911  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.762943  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.763162  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.763185  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.763347  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.763363  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.763487  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.763661  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.763671  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.763828  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.880436  384505 ssh_runner.go:195] Run: systemctl --version
	I1002 11:53:58.886540  384505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:53:59.035347  384505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:53:59.041510  384505 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:53:59.041604  384505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:53:59.056030  384505 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:53:59.056062  384505 start.go:469] detecting cgroup driver to use...
	I1002 11:53:59.056147  384505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:53:59.068680  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:53:59.080770  384505 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:53:59.080823  384505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:53:59.093059  384505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:53:59.106603  384505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:53:59.223135  384505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:53:59.364085  384505 docker.go:213] disabling docker service ...
	I1002 11:53:59.364161  384505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:53:59.378131  384505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:53:59.390380  384505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:53:59.522236  384505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:53:59.663336  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:53:59.677221  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:53:59.694283  384505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1002 11:53:59.694380  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.703409  384505 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:53:59.703481  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.712316  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.721255  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.731204  384505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:53:59.741152  384505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:53:59.748978  384505 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:53:59.749036  384505 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:53:59.761692  384505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:53:59.770571  384505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:53:59.882809  384505 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:54:00.046741  384505 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:54:00.046843  384505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:54:00.051911  384505 start.go:537] Will wait 60s for crictl version
	I1002 11:54:00.051988  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:00.055847  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:54:00.099999  384505 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:54:00.100084  384505 ssh_runner.go:195] Run: crio --version
	I1002 11:54:00.155271  384505 ssh_runner.go:195] Run: crio --version
	I1002 11:54:00.202213  384505 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1002 11:53:58.780030  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Start
	I1002 11:53:58.780201  384787 main.go:141] libmachine: (embed-certs-487027) Ensuring networks are active...
	I1002 11:53:58.780857  384787 main.go:141] libmachine: (embed-certs-487027) Ensuring network default is active
	I1002 11:53:58.781206  384787 main.go:141] libmachine: (embed-certs-487027) Ensuring network mk-embed-certs-487027 is active
	I1002 11:53:58.781581  384787 main.go:141] libmachine: (embed-certs-487027) Getting domain xml...
	I1002 11:53:58.782269  384787 main.go:141] libmachine: (embed-certs-487027) Creating domain...
	I1002 11:54:00.079808  384787 main.go:141] libmachine: (embed-certs-487027) Waiting to get IP...
	I1002 11:54:00.080676  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:00.081052  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:00.081202  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:00.081070  385615 retry.go:31] will retry after 291.88616ms: waiting for machine to come up
	I1002 11:54:00.374941  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:00.375493  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:00.375526  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:00.375441  385615 retry.go:31] will retry after 315.924643ms: waiting for machine to come up
	I1002 11:54:00.693196  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:00.693804  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:00.693840  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:00.693754  385615 retry.go:31] will retry after 473.967353ms: waiting for machine to come up
	I1002 11:54:01.169616  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:01.170137  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:01.170168  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:01.170099  385615 retry.go:31] will retry after 490.884713ms: waiting for machine to come up
	I1002 11:54:01.662881  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:01.663427  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:01.663459  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:01.663380  385615 retry.go:31] will retry after 590.285109ms: waiting for machine to come up
	I1002 11:54:02.255409  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:02.256020  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:02.256048  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:02.255956  385615 retry.go:31] will retry after 586.734935ms: waiting for machine to come up
	I1002 11:54:00.203709  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:54:00.206822  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:54:00.207269  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:54:00.207308  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:54:00.207533  384505 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1002 11:54:00.211596  384505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:00.224503  384505 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1002 11:54:00.224558  384505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:00.267915  384505 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1002 11:54:00.267986  384505 ssh_runner.go:195] Run: which lz4
	I1002 11:54:00.272086  384505 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1002 11:54:00.276281  384505 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:54:00.276322  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1002 11:54:02.169153  384505 crio.go:444] Took 1.897111 seconds to copy over tarball
	I1002 11:54:02.169248  384505 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 11:54:02.844615  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:02.845091  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:02.845129  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:02.845049  385615 retry.go:31] will retry after 765.906555ms: waiting for machine to come up
	I1002 11:54:03.612904  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:03.613374  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:03.613515  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:03.613306  385615 retry.go:31] will retry after 1.240249135s: waiting for machine to come up
	I1002 11:54:04.855370  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:04.855832  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:04.855858  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:04.855785  385615 retry.go:31] will retry after 1.741253702s: waiting for machine to come up
	I1002 11:54:06.599800  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:06.600279  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:06.600307  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:06.600221  385615 retry.go:31] will retry after 1.945988456s: waiting for machine to come up
	I1002 11:54:05.257359  384505 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.088072266s)
	I1002 11:54:05.257395  384505 crio.go:451] Took 3.088214 seconds to extract the tarball
	I1002 11:54:05.257408  384505 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 11:54:05.296693  384505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:05.347131  384505 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1002 11:54:05.347156  384505 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 11:54:05.347231  384505 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:54:05.347239  384505 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.347291  384505 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.347523  384505 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.347545  384505 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.347590  384505 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1002 11:54:05.347712  384505 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.347797  384505 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.349061  384505 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.349109  384505 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:54:05.349136  384505 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.349165  384505 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.349072  384505 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.349076  384505 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.349075  384505 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.349490  384505 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1002 11:54:05.494581  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.497665  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.499676  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.503426  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1002 11:54:05.504502  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.507776  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.511534  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.589967  384505 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1002 11:54:05.590038  384505 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.590101  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.653382  384505 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1002 11:54:05.653450  384505 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.653539  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.674391  384505 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1002 11:54:05.674430  384505 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1002 11:54:05.674447  384505 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.674467  384505 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1002 11:54:05.674508  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.674498  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.674583  384505 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1002 11:54:05.674621  384505 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.674671  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.676359  384505 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1002 11:54:05.676390  384505 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.676425  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.680824  384505 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1002 11:54:05.680858  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.680871  384505 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.680894  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.680905  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.682827  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1002 11:54:05.690404  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.690496  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.690562  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.810224  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1002 11:54:05.840439  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1002 11:54:05.840472  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.840535  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1002 11:54:05.840544  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1002 11:54:05.840583  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1002 11:54:05.840643  384505 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1002 11:54:05.840663  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1002 11:54:05.874997  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1002 11:54:05.875049  384505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1002 11:54:05.875079  384505 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1002 11:54:05.875136  384505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1002 11:54:06.317119  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:54:07.926701  384505 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.609537315s)
	I1002 11:54:07.926715  384505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.051548545s)
	I1002 11:54:07.926786  384505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1002 11:54:07.926855  384505 cache_images.go:92] LoadImages completed in 2.579686998s
	W1002 11:54:07.926953  384505 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I1002 11:54:07.927077  384505 ssh_runner.go:195] Run: crio config
	I1002 11:54:07.991410  384505 cni.go:84] Creating CNI manager for ""
	I1002 11:54:07.991433  384505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:07.991452  384505 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:54:07.991473  384505 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.82 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-749860 NodeName:old-k8s-version-749860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1002 11:54:07.991665  384505 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-749860"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.82
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.82"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-749860
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.82:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:54:07.991752  384505 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-749860 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-749860 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:54:07.991814  384505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1002 11:54:08.002239  384505 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:54:08.002313  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:54:08.012375  384505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1002 11:54:08.031554  384505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:54:08.050801  384505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1002 11:54:08.068326  384505 ssh_runner.go:195] Run: grep 192.168.83.82	control-plane.minikube.internal$ /etc/hosts
	I1002 11:54:08.072798  384505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.82	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:08.085261  384505 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860 for IP: 192.168.83.82
	I1002 11:54:08.085320  384505 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:54:08.085511  384505 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:54:08.085555  384505 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:54:08.085682  384505 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/client.key
	I1002 11:54:08.085771  384505 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/apiserver.key.bc78c23c
	I1002 11:54:08.085823  384505 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/proxy-client.key
	I1002 11:54:08.085973  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:54:08.086020  384505 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:54:08.086035  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:54:08.086071  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:54:08.086101  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:54:08.086163  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:54:08.086237  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:08.087038  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:54:08.111230  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:54:08.133515  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:54:08.157382  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:54:08.180186  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:54:08.210075  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:54:08.232068  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:54:08.253873  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:54:08.276866  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:54:08.300064  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:54:08.322265  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:54:08.346808  384505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:54:08.367194  384505 ssh_runner.go:195] Run: openssl version
	I1002 11:54:08.374709  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:54:08.389274  384505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:08.395338  384505 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:08.395420  384505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:08.401338  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:54:08.412228  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:54:08.423293  384505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:54:08.428146  384505 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:54:08.428213  384505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:54:08.434177  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:54:08.449342  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:54:08.463678  384505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:54:08.468723  384505 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:54:08.468795  384505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:54:08.476711  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:54:08.492116  384505 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:54:08.498510  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:54:08.504961  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:54:08.513012  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:54:08.520620  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:54:08.528578  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:54:08.534685  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:54:08.541262  384505 kubeadm.go:404] StartCluster: {Name:old-k8s-version-749860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-749860 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.82 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:54:08.541401  384505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:54:08.541474  384505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:08.579821  384505 cri.go:89] found id: ""
	I1002 11:54:08.579899  384505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:54:08.590328  384505 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:54:08.590359  384505 kubeadm.go:636] restartCluster start
	I1002 11:54:08.590419  384505 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:54:08.600034  384505 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:08.601660  384505 kubeconfig.go:92] found "old-k8s-version-749860" server: "https://192.168.83.82:8443"
	I1002 11:54:08.605641  384505 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:54:08.615274  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:08.615340  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:08.630952  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:08.630979  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:08.631032  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:08.642433  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:08.547687  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:08.548295  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:08.548331  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:08.548238  385615 retry.go:31] will retry after 2.817726625s: waiting for machine to come up
	I1002 11:54:11.367346  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:11.367909  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:11.367943  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:11.367859  385615 retry.go:31] will retry after 3.066326625s: waiting for machine to come up
	I1002 11:54:09.142569  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:09.143607  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:09.155937  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:09.642536  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:09.642637  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:09.655230  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:10.142683  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:10.142769  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:10.155206  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:10.642757  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:10.642857  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:10.659345  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:11.142860  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:11.142955  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:11.158336  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:11.642849  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:11.642934  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:11.658819  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:12.143538  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:12.143645  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:12.159984  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:12.642536  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:12.642679  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:12.658031  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:13.143496  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:13.143607  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:13.159279  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:13.643567  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:13.643659  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:13.657189  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:14.435299  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:14.435744  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:14.435777  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:14.435699  385615 retry.go:31] will retry after 3.446313194s: waiting for machine to come up
	I1002 11:54:19.007568  384965 start.go:369] acquired machines lock for "default-k8s-diff-port-777999" in 4m4.857829673s
	I1002 11:54:19.007726  384965 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:54:19.007735  384965 fix.go:54] fixHost starting: 
	I1002 11:54:19.008181  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:54:19.008225  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:54:19.025286  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38903
	I1002 11:54:19.025755  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:54:19.026243  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:54:19.026265  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:54:19.026648  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:54:19.026869  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:19.027056  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:54:19.028773  384965 fix.go:102] recreateIfNeeded on default-k8s-diff-port-777999: state=Stopped err=<nil>
	I1002 11:54:19.028799  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	W1002 11:54:19.028984  384965 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:54:19.031466  384965 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-777999" ...
	I1002 11:54:19.033140  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Start
	I1002 11:54:19.033346  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Ensuring networks are active...
	I1002 11:54:19.034009  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Ensuring network default is active
	I1002 11:54:19.034440  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Ensuring network mk-default-k8s-diff-port-777999 is active
	I1002 11:54:19.034843  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Getting domain xml...
	I1002 11:54:19.035519  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Creating domain...
	I1002 11:54:14.142550  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:14.142618  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:14.154742  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:14.643429  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:14.643522  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:14.656075  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:15.142577  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:15.142669  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:15.154422  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:15.643360  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:15.643450  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:15.655255  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:16.142806  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:16.142948  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:16.154896  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:16.643505  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:16.643581  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:16.655413  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:17.142981  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:17.143087  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:17.156411  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:17.642996  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:17.643100  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:17.656886  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:18.143481  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:18.143563  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:18.157184  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:18.616095  384505 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:54:18.616128  384505 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:54:18.616142  384505 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:54:18.616204  384505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:18.654952  384505 cri.go:89] found id: ""
	I1002 11:54:18.655033  384505 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:54:18.674155  384505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:54:18.685052  384505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:54:18.685116  384505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:18.695816  384505 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:18.695844  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:18.821270  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:17.886333  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.886895  384787 main.go:141] libmachine: (embed-certs-487027) Found IP for machine: 192.168.72.147
	I1002 11:54:17.886926  384787 main.go:141] libmachine: (embed-certs-487027) Reserving static IP address...
	I1002 11:54:17.886947  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has current primary IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.887365  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "embed-certs-487027", mac: "52:54:00:06:60:23", ip: "192.168.72.147"} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.887396  384787 main.go:141] libmachine: (embed-certs-487027) DBG | skip adding static IP to network mk-embed-certs-487027 - found existing host DHCP lease matching {name: "embed-certs-487027", mac: "52:54:00:06:60:23", ip: "192.168.72.147"}
	I1002 11:54:17.887404  384787 main.go:141] libmachine: (embed-certs-487027) Reserved static IP address: 192.168.72.147
	I1002 11:54:17.887420  384787 main.go:141] libmachine: (embed-certs-487027) Waiting for SSH to be available...
	I1002 11:54:17.887437  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Getting to WaitForSSH function...
	I1002 11:54:17.889775  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.890175  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.890214  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.890410  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Using SSH client type: external
	I1002 11:54:17.890434  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa (-rw-------)
	I1002 11:54:17.890470  384787 main.go:141] libmachine: (embed-certs-487027) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:54:17.890502  384787 main.go:141] libmachine: (embed-certs-487027) DBG | About to run SSH command:
	I1002 11:54:17.890514  384787 main.go:141] libmachine: (embed-certs-487027) DBG | exit 0
	I1002 11:54:17.974015  384787 main.go:141] libmachine: (embed-certs-487027) DBG | SSH cmd err, output: <nil>: 
	I1002 11:54:17.974444  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetConfigRaw
	I1002 11:54:17.975209  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:17.977468  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.977798  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.977837  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.978016  384787 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/config.json ...
	I1002 11:54:17.978201  384787 machine.go:88] provisioning docker machine ...
	I1002 11:54:17.978220  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:17.978460  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetMachineName
	I1002 11:54:17.978651  384787 buildroot.go:166] provisioning hostname "embed-certs-487027"
	I1002 11:54:17.978669  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetMachineName
	I1002 11:54:17.978817  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:17.980872  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.981298  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.981333  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.981395  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:17.981587  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:17.981746  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:17.981885  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:17.982020  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:17.982399  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:17.982413  384787 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-487027 && echo "embed-certs-487027" | sudo tee /etc/hostname
	I1002 11:54:18.103274  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-487027
	
	I1002 11:54:18.103311  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.106230  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.106654  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.106709  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.106847  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.107082  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.107266  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.107400  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.107589  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:18.108051  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:18.108081  384787 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-487027' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-487027/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-487027' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:54:18.222398  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:54:18.222431  384787 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:54:18.222453  384787 buildroot.go:174] setting up certificates
	I1002 11:54:18.222488  384787 provision.go:83] configureAuth start
	I1002 11:54:18.222500  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetMachineName
	I1002 11:54:18.222817  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:18.225631  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.226114  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.226150  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.226262  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.228719  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.229096  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.229130  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.229268  384787 provision.go:138] copyHostCerts
	I1002 11:54:18.229336  384787 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:54:18.229351  384787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:54:18.229399  384787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:54:18.229480  384787 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:54:18.229492  384787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:54:18.229511  384787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:54:18.229563  384787 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:54:18.229570  384787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:54:18.229586  384787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:54:18.229630  384787 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.embed-certs-487027 san=[192.168.72.147 192.168.72.147 localhost 127.0.0.1 minikube embed-certs-487027]
	I1002 11:54:18.296130  384787 provision.go:172] copyRemoteCerts
	I1002 11:54:18.296187  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:54:18.296212  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.298721  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.299036  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.299059  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.299181  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.299363  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.299479  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.299628  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:18.384449  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 11:54:18.406096  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:54:18.427407  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 11:54:18.448829  384787 provision.go:86] duration metric: configureAuth took 226.314252ms
	I1002 11:54:18.448858  384787 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:54:18.449065  384787 config.go:182] Loaded profile config "embed-certs-487027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:54:18.449178  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.451995  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.452365  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.452405  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.452596  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.452786  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.452958  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.453077  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.453213  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:18.453571  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:18.453606  384787 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:54:18.754879  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:54:18.754913  384787 machine.go:91] provisioned docker machine in 776.69782ms
	I1002 11:54:18.754927  384787 start.go:300] post-start starting for "embed-certs-487027" (driver="kvm2")
	I1002 11:54:18.754941  384787 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:54:18.754966  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:18.755361  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:54:18.755392  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.758184  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.758644  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.758700  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.758788  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.758981  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.759149  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.759414  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:18.847614  384787 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:54:18.851792  384787 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:54:18.851821  384787 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:54:18.851911  384787 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:54:18.852023  384787 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:54:18.852152  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:54:18.861415  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:18.883190  384787 start.go:303] post-start completed in 128.242372ms
	I1002 11:54:18.883222  384787 fix.go:56] fixHost completed within 20.127922888s
	I1002 11:54:18.883249  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.885771  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.886114  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.886141  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.886335  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.886598  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.886784  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.886922  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.887111  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:18.887556  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:18.887574  384787 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 11:54:19.007352  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247658.948838951
	
	I1002 11:54:19.007388  384787 fix.go:206] guest clock: 1696247658.948838951
	I1002 11:54:19.007404  384787 fix.go:219] Guest: 2023-10-02 11:54:18.948838951 +0000 UTC Remote: 2023-10-02 11:54:18.883226893 +0000 UTC m=+271.237550126 (delta=65.612058ms)
	I1002 11:54:19.007464  384787 fix.go:190] guest clock delta is within tolerance: 65.612058ms
	I1002 11:54:19.007471  384787 start.go:83] releasing machines lock for "embed-certs-487027", held for 20.25221392s
	I1002 11:54:19.007510  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.007831  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:19.011020  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.011386  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:19.011418  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.011602  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.012303  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.012520  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.012602  384787 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:54:19.012660  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:19.012946  384787 ssh_runner.go:195] Run: cat /version.json
	I1002 11:54:19.012976  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:19.015652  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.015935  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.016016  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:19.016063  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.016284  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:19.016411  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:19.016439  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.016482  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:19.016638  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:19.016653  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:19.016868  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:19.016871  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:19.017017  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:19.017199  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:19.124634  384787 ssh_runner.go:195] Run: systemctl --version
	I1002 11:54:19.130340  384787 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:54:19.278814  384787 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:54:19.284549  384787 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:54:19.284618  384787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:54:19.300872  384787 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:54:19.300896  384787 start.go:469] detecting cgroup driver to use...
	I1002 11:54:19.300984  384787 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:54:19.314898  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:54:19.327762  384787 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:54:19.327826  384787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:54:19.341164  384787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:54:19.354542  384787 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:54:19.469125  384787 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:54:19.581195  384787 docker.go:213] disabling docker service ...
	I1002 11:54:19.581260  384787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:54:19.595222  384787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:54:19.607587  384787 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:54:19.725376  384787 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:54:19.828507  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:54:19.845782  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:54:19.868464  384787 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:54:19.868530  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.881554  384787 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:54:19.881633  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.894090  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.905922  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.918336  384787 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:54:19.931259  384787 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:54:19.939861  384787 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:54:19.939925  384787 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:54:19.954089  384787 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:54:19.966438  384787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:54:20.124666  384787 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:54:20.329505  384787 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:54:20.329602  384787 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:54:20.336428  384787 start.go:537] Will wait 60s for crictl version
	I1002 11:54:20.336499  384787 ssh_runner.go:195] Run: which crictl
	I1002 11:54:20.343269  384787 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:54:20.386249  384787 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:54:20.386331  384787 ssh_runner.go:195] Run: crio --version
	I1002 11:54:20.429634  384787 ssh_runner.go:195] Run: crio --version
	I1002 11:54:20.476699  384787 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:54:20.478035  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:20.480720  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:20.481028  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:20.481054  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:20.481230  384787 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1002 11:54:20.485387  384787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:20.496957  384787 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:54:20.497028  384787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:20.539655  384787 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 11:54:20.539731  384787 ssh_runner.go:195] Run: which lz4
	I1002 11:54:20.543869  384787 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 11:54:20.548080  384787 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:54:20.548112  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1002 11:54:22.411067  384787 crio.go:444] Took 1.867223 seconds to copy over tarball
	I1002 11:54:22.411155  384787 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 11:54:20.416319  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting to get IP...
	I1002 11:54:20.417168  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.417561  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.417613  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:20.417539  385761 retry.go:31] will retry after 211.341658ms: waiting for machine to come up
	I1002 11:54:20.631097  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.631841  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.632011  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:20.631972  385761 retry.go:31] will retry after 257.651992ms: waiting for machine to come up
	I1002 11:54:20.891519  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.892077  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.892111  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:20.892047  385761 retry.go:31] will retry after 295.599576ms: waiting for machine to come up
	I1002 11:54:21.189739  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.190333  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.190389  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:21.190275  385761 retry.go:31] will retry after 532.182463ms: waiting for machine to come up
	I1002 11:54:21.723822  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.724414  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.724443  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:21.724314  385761 retry.go:31] will retry after 576.235756ms: waiting for machine to come up
	I1002 11:54:22.301975  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:22.302566  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:22.302600  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:22.302479  385761 retry.go:31] will retry after 913.441142ms: waiting for machine to come up
	I1002 11:54:23.217419  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:23.217905  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:23.217943  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:23.217839  385761 retry.go:31] will retry after 1.089960204s: waiting for machine to come up
	I1002 11:54:19.625761  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:19.857853  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:19.977490  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:20.080170  384505 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:54:20.080294  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:20.097093  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:20.611090  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:21.110857  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:21.610499  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:22.111420  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:22.138171  384505 api_server.go:72] duration metric: took 2.057999603s to wait for apiserver process to appear ...
	I1002 11:54:22.138201  384505 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:54:22.138224  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:25.604442  384787 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.193244457s)
	I1002 11:54:25.604543  384787 crio.go:451] Took 3.193443 seconds to extract the tarball
	I1002 11:54:25.604568  384787 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 11:54:25.660515  384787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:25.723308  384787 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 11:54:25.723339  384787 cache_images.go:84] Images are preloaded, skipping loading
	I1002 11:54:25.723436  384787 ssh_runner.go:195] Run: crio config
	I1002 11:54:25.781690  384787 cni.go:84] Creating CNI manager for ""
	I1002 11:54:25.781722  384787 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:25.781748  384787 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:54:25.781775  384787 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.147 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-487027 NodeName:embed-certs-487027 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:54:25.782020  384787 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-487027"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:54:25.782125  384787 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-487027 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:embed-certs-487027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:54:25.782183  384787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:54:25.791322  384787 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:54:25.791398  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:54:25.799709  384787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1002 11:54:25.818900  384787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:54:25.836913  384787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1002 11:54:25.856201  384787 ssh_runner.go:195] Run: grep 192.168.72.147	control-plane.minikube.internal$ /etc/hosts
	I1002 11:54:25.859962  384787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:25.872776  384787 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027 for IP: 192.168.72.147
	I1002 11:54:25.872818  384787 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:54:25.873061  384787 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:54:25.873125  384787 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:54:25.873225  384787 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/client.key
	I1002 11:54:25.873312  384787 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/apiserver.key.b24df18b
	I1002 11:54:25.873375  384787 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/proxy-client.key
	I1002 11:54:25.873530  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:54:25.873590  384787 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:54:25.873602  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:54:25.873633  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:54:25.873667  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:54:25.873702  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:54:25.873757  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:25.874732  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:54:25.901588  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:54:25.929381  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:54:25.955358  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:54:25.980414  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:54:26.008652  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:54:26.038061  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:54:26.067828  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:54:26.098717  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:54:26.131030  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:54:26.162989  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:54:26.189458  384787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:54:26.206791  384787 ssh_runner.go:195] Run: openssl version
	I1002 11:54:26.214436  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:54:26.226064  384787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:26.231428  384787 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:26.231504  384787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:26.238070  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:54:26.252779  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:54:26.267263  384787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:54:26.272245  384787 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:54:26.272316  384787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:54:26.278088  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:54:26.289430  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:54:26.300788  384787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:54:26.305731  384787 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:54:26.305812  384787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:54:26.311712  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:54:26.322855  384787 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:54:26.328688  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:54:26.336570  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:54:26.344412  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:54:26.350583  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:54:26.356815  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:54:26.364674  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:54:26.372219  384787 kubeadm.go:404] StartCluster: {Name:embed-certs-487027 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.2 ClusterName:embed-certs-487027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.147 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:54:26.372341  384787 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:54:26.372397  384787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:26.424018  384787 cri.go:89] found id: ""
	I1002 11:54:26.424131  384787 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:54:26.435493  384787 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:54:26.435520  384787 kubeadm.go:636] restartCluster start
	I1002 11:54:26.435583  384787 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:54:26.447429  384787 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:26.448848  384787 kubeconfig.go:92] found "embed-certs-487027" server: "https://192.168.72.147:8443"
	I1002 11:54:26.452474  384787 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:54:26.462854  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:26.462924  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:26.475723  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:26.475751  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:26.475803  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:26.488962  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:26.989693  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:26.989776  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:27.002889  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:27.489487  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:27.489589  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:27.503912  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:24.308867  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:24.309362  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:24.309392  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:24.309326  385761 retry.go:31] will retry after 1.381170872s: waiting for machine to come up
	I1002 11:54:25.691931  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:25.692285  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:25.692386  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:25.692267  385761 retry.go:31] will retry after 1.748966707s: waiting for machine to come up
	I1002 11:54:27.442708  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:27.443145  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:27.443171  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:27.443107  385761 retry.go:31] will retry after 2.105420589s: waiting for machine to come up
	I1002 11:54:27.138701  384505 api_server.go:269] stopped: https://192.168.83.82:8443/healthz: Get "https://192.168.83.82:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 11:54:27.138757  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:28.249499  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:54:28.249540  384505 api_server.go:103] status: https://192.168.83.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:54:28.750389  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:28.756351  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1002 11:54:28.756390  384505 api_server.go:103] status: https://192.168.83.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1002 11:54:29.250308  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:29.257228  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1002 11:54:29.257264  384505 api_server.go:103] status: https://192.168.83.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1002 11:54:29.750123  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:29.758475  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 200:
	ok
	I1002 11:54:29.769049  384505 api_server.go:141] control plane version: v1.16.0
	I1002 11:54:29.769079  384505 api_server.go:131] duration metric: took 7.630868963s to wait for apiserver health ...
	I1002 11:54:29.769098  384505 cni.go:84] Creating CNI manager for ""
	I1002 11:54:29.769107  384505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:29.770969  384505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:54:27.989735  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:27.989861  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:28.007059  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:28.489495  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:28.489605  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:28.505845  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:28.989879  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:28.989963  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:29.004220  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:29.489847  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:29.489949  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:29.502986  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:29.989170  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:29.989264  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:30.006850  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:30.489389  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:30.489504  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:30.502094  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:30.989302  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:30.989399  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:31.005902  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:31.489967  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:31.490080  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:31.503748  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:31.989317  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:31.989405  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:32.003288  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:32.489803  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:32.489924  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:32.506744  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:29.550027  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:29.550550  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:29.550585  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:29.550488  385761 retry.go:31] will retry after 2.509962026s: waiting for machine to come up
	I1002 11:54:32.063392  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:32.063862  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:32.063887  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:32.063834  385761 retry.go:31] will retry after 2.845339865s: waiting for machine to come up
	I1002 11:54:29.772611  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:54:29.786551  384505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:54:29.807894  384505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:54:29.818837  384505 system_pods.go:59] 7 kube-system pods found
	I1002 11:54:29.818890  384505 system_pods.go:61] "coredns-5644d7b6d9-9xdpq" [2d10c772-e2f0-4bfc-9795-0721f8bab31c] Running
	I1002 11:54:29.818901  384505 system_pods.go:61] "etcd-old-k8s-version-749860" [5826895a-f14d-43ab-9f22-edad964d4a8e] Running
	I1002 11:54:29.818910  384505 system_pods.go:61] "kube-apiserver-old-k8s-version-749860" [3418ba32-aa28-4587-a231-b1f218181e71] Running
	I1002 11:54:29.818919  384505 system_pods.go:61] "kube-controller-manager-old-k8s-version-749860" [e42ff4c0-2ec4-45b9-8189-6a225c79f5c6] Running
	I1002 11:54:29.818927  384505 system_pods.go:61] "kube-proxy-gkhxb" [b3675678-e1cf-4d86-82d9-9e068bd1ba19] Running
	I1002 11:54:29.818939  384505 system_pods.go:61] "kube-scheduler-old-k8s-version-749860" [53a1c8a7-ec6d-4d47-a980-8cfab71ad467] Running
	I1002 11:54:29.818948  384505 system_pods.go:61] "storage-provisioner" [e73d6f24-1392-40ca-b37d-03c035734d1d] Running
	I1002 11:54:29.818964  384505 system_pods.go:74] duration metric: took 11.044895ms to wait for pod list to return data ...
	I1002 11:54:29.818980  384505 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:54:29.822392  384505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:54:29.822455  384505 node_conditions.go:123] node cpu capacity is 2
	I1002 11:54:29.822472  384505 node_conditions.go:105] duration metric: took 3.48317ms to run NodePressure ...
	I1002 11:54:29.822520  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:30.106960  384505 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:54:30.111692  384505 retry.go:31] will retry after 218.727225ms: kubelet not initialised
	I1002 11:54:30.336456  384505 retry.go:31] will retry after 524.868139ms: kubelet not initialised
	I1002 11:54:30.867554  384505 retry.go:31] will retry after 427.897694ms: kubelet not initialised
	I1002 11:54:31.301616  384505 retry.go:31] will retry after 722.780158ms: kubelet not initialised
	I1002 11:54:32.029512  384505 retry.go:31] will retry after 1.205429819s: kubelet not initialised
	I1002 11:54:33.253735  384505 retry.go:31] will retry after 1.476521325s: kubelet not initialised
	I1002 11:54:32.989607  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:32.989718  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:33.004745  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:33.489141  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:33.489215  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:33.506018  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:33.990120  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:33.990217  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:34.005050  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:34.489520  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:34.489608  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:34.501965  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:34.989481  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:34.989584  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:35.002635  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:35.489123  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:35.489199  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:35.502995  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:35.989474  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:35.989565  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:36.003010  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:36.463582  384787 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:54:36.463614  384787 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:54:36.463628  384787 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:54:36.463689  384787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:36.503915  384787 cri.go:89] found id: ""
	I1002 11:54:36.503982  384787 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:54:36.519603  384787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:54:36.529026  384787 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:54:36.529086  384787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:36.538424  384787 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:36.538451  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:36.670492  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:34.910513  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:34.911092  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:34.911136  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:34.911030  385761 retry.go:31] will retry after 3.250805502s: waiting for machine to come up
	I1002 11:54:38.163585  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.164065  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Found IP for machine: 192.168.61.251
	I1002 11:54:38.164104  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has current primary IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.164124  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Reserving static IP address...
	I1002 11:54:38.164549  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-777999", mac: "52:54:00:15:a7:c9", ip: "192.168.61.251"} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.164588  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | skip adding static IP to network mk-default-k8s-diff-port-777999 - found existing host DHCP lease matching {name: "default-k8s-diff-port-777999", mac: "52:54:00:15:a7:c9", ip: "192.168.61.251"}
	I1002 11:54:38.164604  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Reserved static IP address: 192.168.61.251
	I1002 11:54:38.164623  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for SSH to be available...
	I1002 11:54:38.164639  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Getting to WaitForSSH function...
	I1002 11:54:38.166901  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.167279  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.167313  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.167579  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Using SSH client type: external
	I1002 11:54:38.167610  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa (-rw-------)
	I1002 11:54:38.167649  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:54:38.167671  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | About to run SSH command:
	I1002 11:54:38.167694  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | exit 0
	I1002 11:54:38.274617  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | SSH cmd err, output: <nil>: 
	I1002 11:54:38.275081  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetConfigRaw
	I1002 11:54:38.275836  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:38.278750  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.279150  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.279193  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.279391  384965 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/config.json ...
	I1002 11:54:38.279621  384965 machine.go:88] provisioning docker machine ...
	I1002 11:54:38.279646  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:38.279886  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetMachineName
	I1002 11:54:38.280069  384965 buildroot.go:166] provisioning hostname "default-k8s-diff-port-777999"
	I1002 11:54:38.280094  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetMachineName
	I1002 11:54:38.280253  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.282736  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.283104  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.283136  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.283230  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.283399  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.283578  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.283733  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.283892  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:38.284295  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:38.284312  384965 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-777999 && echo "default-k8s-diff-port-777999" | sudo tee /etc/hostname
	I1002 11:54:38.443082  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-777999
	
	I1002 11:54:38.443200  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.446493  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.447061  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.447106  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.447288  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.447549  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.447737  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.447899  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.448132  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:38.448554  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:38.448586  384965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-777999' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-777999/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-777999' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:54:38.594884  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:54:38.594920  384965 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:54:38.594956  384965 buildroot.go:174] setting up certificates
	I1002 11:54:38.594975  384965 provision.go:83] configureAuth start
	I1002 11:54:38.594993  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetMachineName
	I1002 11:54:38.595325  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:38.597718  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.598053  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.598088  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.598217  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.600751  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.601065  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.601099  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.601219  384965 provision.go:138] copyHostCerts
	I1002 11:54:38.601300  384965 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:54:38.601316  384965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:54:38.601393  384965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:54:38.601520  384965 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:54:38.601534  384965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:54:38.601565  384965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:54:38.601634  384965 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:54:38.601644  384965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:54:38.601670  384965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:54:38.601728  384965 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-777999 san=[192.168.61.251 192.168.61.251 localhost 127.0.0.1 minikube default-k8s-diff-port-777999]
	I1002 11:54:38.706714  384965 provision.go:172] copyRemoteCerts
	I1002 11:54:38.706783  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:54:38.706847  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.709075  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.709491  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.709547  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.709658  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.709903  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.710087  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.710216  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:38.803103  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 11:54:38.825916  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:54:38.847881  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1002 11:54:38.873772  384965 provision.go:86] duration metric: configureAuth took 278.777931ms
	I1002 11:54:38.873804  384965 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:54:38.874066  384965 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:54:38.874154  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.876864  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.877269  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.877304  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.877453  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.877666  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.877797  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.877936  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.878087  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:38.878441  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:38.878469  384965 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:54:34.736594  384505 retry.go:31] will retry after 1.866771295s: kubelet not initialised
	I1002 11:54:36.609977  384505 retry.go:31] will retry after 4.83087592s: kubelet not initialised
	I1002 11:54:39.495298  384344 start.go:369] acquired machines lock for "no-preload-304121" in 55.626389891s
	I1002 11:54:39.495355  384344 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:54:39.495364  384344 fix.go:54] fixHost starting: 
	I1002 11:54:39.495800  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:54:39.495839  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:54:39.518491  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I1002 11:54:39.518893  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:54:39.519407  384344 main.go:141] libmachine: Using API Version  1
	I1002 11:54:39.519432  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:54:39.519757  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:54:39.519941  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:54:39.520099  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 11:54:39.521857  384344 fix.go:102] recreateIfNeeded on no-preload-304121: state=Stopped err=<nil>
	I1002 11:54:39.521885  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	W1002 11:54:39.522058  384344 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:54:39.524119  384344 out.go:177] * Restarting existing kvm2 VM for "no-preload-304121" ...
	I1002 11:54:39.215761  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:54:39.215794  384965 machine.go:91] provisioned docker machine in 936.155542ms
	I1002 11:54:39.215807  384965 start.go:300] post-start starting for "default-k8s-diff-port-777999" (driver="kvm2")
	I1002 11:54:39.215822  384965 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:54:39.215848  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.216265  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:54:39.216305  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.219032  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.219387  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.219418  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.219542  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.219748  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.219910  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.220054  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:39.317075  384965 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:54:39.321405  384965 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:54:39.321429  384965 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:54:39.321505  384965 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:54:39.321599  384965 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:54:39.321716  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:54:39.330980  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:39.357830  384965 start.go:303] post-start completed in 142.005546ms
	I1002 11:54:39.357863  384965 fix.go:56] fixHost completed within 20.350127508s
	I1002 11:54:39.357900  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.360232  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.360561  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.360598  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.360768  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.360966  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.361139  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.361264  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.361425  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:39.361918  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:39.361939  384965 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:54:39.495129  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247679.435720520
	
	I1002 11:54:39.495155  384965 fix.go:206] guest clock: 1696247679.435720520
	I1002 11:54:39.495166  384965 fix.go:219] Guest: 2023-10-02 11:54:39.43572052 +0000 UTC Remote: 2023-10-02 11:54:39.357871423 +0000 UTC m=+265.343763085 (delta=77.849097ms)
	I1002 11:54:39.495194  384965 fix.go:190] guest clock delta is within tolerance: 77.849097ms
	I1002 11:54:39.495206  384965 start.go:83] releasing machines lock for "default-k8s-diff-port-777999", held for 20.487515438s
	I1002 11:54:39.495242  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.495652  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:39.498667  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.499055  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.499114  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.499370  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.499891  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.500060  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.500132  384965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:54:39.500199  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.500539  384965 ssh_runner.go:195] Run: cat /version.json
	I1002 11:54:39.500565  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.503388  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.503580  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.503885  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.503917  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.503995  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.504000  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.504081  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.504281  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.504297  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.504459  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.504459  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.504682  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.504680  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:39.504825  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:39.623582  384965 ssh_runner.go:195] Run: systemctl --version
	I1002 11:54:39.631181  384965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:54:39.787298  384965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:54:39.795202  384965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:54:39.795303  384965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:54:39.816471  384965 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:54:39.816495  384965 start.go:469] detecting cgroup driver to use...
	I1002 11:54:39.816567  384965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:54:39.836594  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:54:39.852798  384965 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:54:39.852911  384965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:54:39.868676  384965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:54:39.885480  384965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:54:40.003441  384965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:54:40.146812  384965 docker.go:213] disabling docker service ...
	I1002 11:54:40.146916  384965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:54:40.163451  384965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:54:40.178327  384965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:54:40.339579  384965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:54:40.463502  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:54:40.476402  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:54:40.499021  384965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:54:40.499117  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.511680  384965 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:54:40.511752  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.524364  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.536675  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.549326  384965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:54:40.559447  384965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:54:40.570086  384965 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:54:40.570157  384965 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:54:40.582938  384965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:54:40.594250  384965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:54:40.739528  384965 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:54:40.964248  384965 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:54:40.964336  384965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:54:40.969637  384965 start.go:537] Will wait 60s for crictl version
	I1002 11:54:40.969696  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:54:40.974270  384965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:54:41.016986  384965 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:54:41.017121  384965 ssh_runner.go:195] Run: crio --version
	I1002 11:54:41.061313  384965 ssh_runner.go:195] Run: crio --version
	I1002 11:54:41.112139  384965 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:54:39.525634  384344 main.go:141] libmachine: (no-preload-304121) Calling .Start
	I1002 11:54:39.525802  384344 main.go:141] libmachine: (no-preload-304121) Ensuring networks are active...
	I1002 11:54:39.526566  384344 main.go:141] libmachine: (no-preload-304121) Ensuring network default is active
	I1002 11:54:39.526860  384344 main.go:141] libmachine: (no-preload-304121) Ensuring network mk-no-preload-304121 is active
	I1002 11:54:39.527227  384344 main.go:141] libmachine: (no-preload-304121) Getting domain xml...
	I1002 11:54:39.527942  384344 main.go:141] libmachine: (no-preload-304121) Creating domain...
	I1002 11:54:40.973483  384344 main.go:141] libmachine: (no-preload-304121) Waiting to get IP...
	I1002 11:54:40.974731  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:40.975262  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:40.975359  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:40.975266  385933 retry.go:31] will retry after 231.149062ms: waiting for machine to come up
	I1002 11:54:41.207806  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:41.208486  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:41.208522  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:41.208461  385933 retry.go:31] will retry after 390.353931ms: waiting for machine to come up
	I1002 11:54:37.939830  384787 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.269286101s)
	I1002 11:54:37.939876  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:38.149675  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:38.246179  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:38.327794  384787 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:54:38.327884  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:38.343240  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:38.855719  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:39.355428  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:39.854862  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:40.355228  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:40.855597  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:40.891530  384787 api_server.go:72] duration metric: took 2.563733499s to wait for apiserver process to appear ...
	I1002 11:54:40.891560  384787 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:54:40.891581  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:40.892226  384787 api_server.go:269] stopped: https://192.168.72.147:8443/healthz: Get "https://192.168.72.147:8443/healthz": dial tcp 192.168.72.147:8443: connect: connection refused
	I1002 11:54:40.892274  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:40.892799  384787 api_server.go:269] stopped: https://192.168.72.147:8443/healthz: Get "https://192.168.72.147:8443/healthz": dial tcp 192.168.72.147:8443: connect: connection refused
	I1002 11:54:41.393747  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:41.113638  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:41.116930  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:41.117360  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:41.117396  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:41.117684  384965 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1002 11:54:41.122622  384965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:41.138418  384965 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:54:41.138496  384965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:41.189380  384965 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 11:54:41.189465  384965 ssh_runner.go:195] Run: which lz4
	I1002 11:54:41.194945  384965 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1002 11:54:41.200215  384965 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:54:41.200254  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1002 11:54:43.164279  384965 crio.go:444] Took 1.969380 seconds to copy over tarball
	I1002 11:54:43.164370  384965 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 11:54:41.447247  384505 retry.go:31] will retry after 8.441231321s: kubelet not initialised
	I1002 11:54:41.600866  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:41.601691  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:41.601729  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:41.601345  385933 retry.go:31] will retry after 381.859851ms: waiting for machine to come up
	I1002 11:54:41.985107  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:41.986545  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:41.986572  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:41.986434  385933 retry.go:31] will retry after 606.51751ms: waiting for machine to come up
	I1002 11:54:42.594443  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:42.595004  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:42.595031  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:42.594935  385933 retry.go:31] will retry after 474.689172ms: waiting for machine to come up
	I1002 11:54:43.071618  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:43.072140  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:43.072196  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:43.072085  385933 retry.go:31] will retry after 931.163736ms: waiting for machine to come up
	I1002 11:54:44.005228  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:44.005899  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:44.005927  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:44.005852  385933 retry.go:31] will retry after 1.133426769s: waiting for machine to come up
	I1002 11:54:45.141320  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:45.142068  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:45.142099  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:45.141965  385933 retry.go:31] will retry after 1.458717431s: waiting for machine to come up
	I1002 11:54:45.416658  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:54:45.416697  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:54:45.416713  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:45.489874  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:54:45.489918  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:54:45.893115  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:45.901437  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:54:45.901477  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:54:46.393114  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:46.399302  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:54:46.399337  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:54:46.892875  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:46.898524  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 200:
	ok
	I1002 11:54:46.908311  384787 api_server.go:141] control plane version: v1.28.2
	I1002 11:54:46.908342  384787 api_server.go:131] duration metric: took 6.016772427s to wait for apiserver health ...
	I1002 11:54:46.908354  384787 cni.go:84] Creating CNI manager for ""
	I1002 11:54:46.908364  384787 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:47.225292  384787 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:54:47.481617  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:54:47.499011  384787 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
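The 457-byte `/etc/cni/net.d/1-k8s.conflist` copied above is not reproduced in the log. For orientation, a representative bridge CNI conflist (shape and values illustrative, not necessarily minikube's exact template) looks roughly like:

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```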
	I1002 11:54:47.535238  384787 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:54:46.620757  384965 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.456345361s)
	I1002 11:54:46.620801  384965 crio.go:451] Took 3.456492 seconds to extract the tarball
	I1002 11:54:46.620814  384965 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 11:54:46.677550  384965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:46.810235  384965 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 11:54:46.810265  384965 cache_images.go:84] Images are preloaded, skipping loading
	I1002 11:54:46.810334  384965 ssh_runner.go:195] Run: crio config
	I1002 11:54:46.875355  384965 cni.go:84] Creating CNI manager for ""
	I1002 11:54:46.875378  384965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:46.875397  384965 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:54:46.875417  384965 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.251 APIServerPort:8444 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-777999 NodeName:default-k8s-diff-port-777999 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:54:46.875588  384965 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.251
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-777999"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:54:46.875674  384965 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-777999 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1002 11:54:46.875737  384965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:54:46.886943  384965 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:54:46.887034  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:54:46.898434  384965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1002 11:54:46.917830  384965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:54:46.936297  384965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1002 11:54:46.954413  384965 ssh_runner.go:195] Run: grep 192.168.61.251	control-plane.minikube.internal$ /etc/hosts
	I1002 11:54:46.958832  384965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
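The `/etc/hosts` one-liner above filters out any stale `control-plane.minikube.internal` mapping and appends the fresh one. The same rewrite in pure string form (illustrative helper, not minikube code):

```go
package main

import (
	"fmt"
	"strings"
)

// setHostsEntry drops any line already mapping `host` (tab-separated,
// as minikube writes it) and appends the new ip→host line.
func setHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry, replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.61.1\tcontrol-plane.minikube.internal\n"
	fmt.Print(setHostsEntry(in, "192.168.61.251", "control-plane.minikube.internal"))
}
```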
	I1002 11:54:46.970802  384965 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999 for IP: 192.168.61.251
	I1002 11:54:46.970845  384965 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:54:46.971031  384965 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:54:46.971093  384965 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:54:46.971194  384965 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/client.key
	I1002 11:54:46.971286  384965 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/apiserver.key.04d51ca9
	I1002 11:54:46.971341  384965 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/proxy-client.key
	I1002 11:54:46.971469  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:54:46.971507  384965 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:54:46.971524  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:54:46.971572  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:54:46.971614  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:54:46.971652  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:54:46.971713  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:46.972319  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:54:46.998880  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:54:47.024639  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:54:47.048695  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:54:47.076815  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:54:47.102469  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:54:47.128913  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:54:47.155863  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:54:47.185058  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:54:47.212289  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:54:47.236848  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:54:47.261485  384965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:54:47.278535  384965 ssh_runner.go:195] Run: openssl version
	I1002 11:54:47.284888  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:54:47.296352  384965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:54:47.301262  384965 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:54:47.301331  384965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:54:47.307136  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:54:47.317650  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:54:47.328371  384965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:47.333341  384965 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:47.333421  384965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:47.339268  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:54:47.349646  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:54:47.360575  384965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:54:47.367279  384965 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:54:47.367346  384965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:54:47.374693  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:54:47.386302  384965 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:54:47.391448  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:54:47.397407  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:54:47.403122  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:54:47.408810  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:54:47.414684  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:54:47.420606  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:54:47.426568  384965 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-777999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:54:47.426702  384965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:54:47.426747  384965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:47.467190  384965 cri.go:89] found id: ""
	I1002 11:54:47.467275  384965 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:54:47.478921  384965 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:54:47.478944  384965 kubeadm.go:636] restartCluster start
	I1002 11:54:47.479016  384965 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:54:47.492971  384965 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:47.494091  384965 kubeconfig.go:92] found "default-k8s-diff-port-777999" server: "https://192.168.61.251:8444"
	I1002 11:54:47.498738  384965 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:54:47.510376  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:47.510454  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:47.523397  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:47.523417  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:47.523459  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:47.536893  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:48.037653  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:48.037746  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:48.055280  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:48.537887  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:48.537979  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:48.555759  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:49.037998  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:49.038108  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:46.602496  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:46.654672  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:46.654707  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:46.602962  385933 retry.go:31] will retry after 1.25268648s: waiting for machine to come up
	I1002 11:54:47.857506  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:47.858115  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:47.858149  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:47.858061  385933 retry.go:31] will retry after 2.104571101s: waiting for machine to come up
	I1002 11:54:49.964533  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:49.964997  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:49.965031  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:49.964942  385933 retry.go:31] will retry after 2.047553587s: waiting for machine to come up
	I1002 11:54:47.766443  384787 system_pods.go:59] 8 kube-system pods found
	I1002 11:54:47.766485  384787 system_pods.go:61] "coredns-5dd5756b68-6glsj" [ad7c852a-cdac-4ada-99da-4115b447f00c] Running
	I1002 11:54:47.766498  384787 system_pods.go:61] "etcd-embed-certs-487027" [78f5c4ed-7baf-4339-811f-c25e934de0c4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:54:47.766516  384787 system_pods.go:61] "kube-apiserver-embed-certs-487027" [275bb65c-b955-43d9-839b-6439e8c19662] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:54:47.766524  384787 system_pods.go:61] "kube-controller-manager-embed-certs-487027" [d798407e-abe2-4b70-952e-1274fff006bc] Running
	I1002 11:54:47.766532  384787 system_pods.go:61] "kube-proxy-wjjtv" [54e35e5e-7045-497f-8fef-322fe0e43afd] Running
	I1002 11:54:47.766543  384787 system_pods.go:61] "kube-scheduler-embed-certs-487027" [62c61cf2-f18e-47a9-9729-20e87fe02c89] Running
	I1002 11:54:47.766556  384787 system_pods.go:61] "metrics-server-57f55c9bc5-d8c7b" [71c33b74-c942-403a-a1d4-2b852f0070a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:54:47.766568  384787 system_pods.go:61] "storage-provisioner" [0a8120e1-c879-4726-abab-f95a4a3c8721] Running
	I1002 11:54:47.766581  384787 system_pods.go:74] duration metric: took 231.314062ms to wait for pod list to return data ...
	I1002 11:54:47.766593  384787 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:54:48.206673  384787 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:54:48.206710  384787 node_conditions.go:123] node cpu capacity is 2
	I1002 11:54:48.206722  384787 node_conditions.go:105] duration metric: took 440.12142ms to run NodePressure ...
	I1002 11:54:48.206743  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:48.736269  384787 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:54:48.754061  384787 kubeadm.go:787] kubelet initialised
	I1002 11:54:48.754094  384787 kubeadm.go:788] duration metric: took 17.795803ms waiting for restarted kubelet to initialise ...
	I1002 11:54:48.754106  384787 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:54:48.763480  384787 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:50.815900  384787 pod_ready.go:102] pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace has status "Ready":"False"
	I1002 11:54:51.815729  384787 pod_ready.go:92] pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:51.815752  384787 pod_ready.go:81] duration metric: took 3.052241738s waiting for pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:51.815761  384787 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
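The `pod_ready.go` lines above flip from `"Ready":"False"` to `"Ready":"True"` once coredns's Ready condition is True. The underlying check, sketched with minimal stand-ins for the corev1 condition types (assumed shape, for illustration only):

```go
package main

import "fmt"

// podCondition is a minimal stand-in for corev1.PodCondition.
type podCondition struct {
	Type   string
	Status string
}

// isPodReady reports whether the pod's Ready condition has status True,
// the predicate behind the "has status \"Ready\":\"True\"" log lines.
func isPodReady(conds []podCondition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	notReady := []podCondition{{Type: "Ready", Status: "False"}}
	ready := []podCondition{{Type: "Ready", Status: "True"}}
	fmt.Println(isPodReady(notReady), isPodReady(ready)) // false true
}
```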
	W1002 11:54:49.055614  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:49.537412  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:49.537517  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:49.554838  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:50.037334  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:50.037460  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:50.050213  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:50.537454  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:50.537586  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:50.551733  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:51.037281  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:51.037394  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:51.055077  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:51.537591  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:51.537672  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:51.555315  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:52.037929  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:52.038038  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:52.052852  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:52.537358  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:52.537435  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:52.553169  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:53.037814  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:53.037913  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:53.055176  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:53.537764  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:53.537869  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:53.554864  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:54.037941  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:54.038052  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:49.895219  384505 retry.go:31] will retry after 9.020637322s: kubelet not initialised
	I1002 11:54:52.015240  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:52.015623  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:52.015646  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:52.015594  385933 retry.go:31] will retry after 3.361214112s: waiting for machine to come up
	I1002 11:54:55.378293  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:55.378805  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:55.378853  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:55.378772  385933 retry.go:31] will retry after 3.33521217s: waiting for machine to come up
	I1002 11:54:53.337930  384787 pod_ready.go:92] pod "etcd-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:53.337967  384787 pod_ready.go:81] duration metric: took 1.522199476s waiting for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:53.337979  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:53.344756  384787 pod_ready.go:92] pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:53.344782  384787 pod_ready.go:81] duration metric: took 6.79552ms waiting for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:53.344791  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:55.549698  384787 pod_ready.go:102] pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace has status "Ready":"False"
	I1002 11:54:57.049146  384787 pod_ready.go:92] pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:57.049177  384787 pod_ready.go:81] duration metric: took 3.704379238s waiting for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:57.049192  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wjjtv" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:57.055125  384787 pod_ready.go:92] pod "kube-proxy-wjjtv" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:57.055144  384787 pod_ready.go:81] duration metric: took 5.945156ms waiting for pod "kube-proxy-wjjtv" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:57.055152  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	W1002 11:54:54.056234  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:54.537821  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:54.537918  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:54.552634  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:55.037141  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:55.037220  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:55.052963  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:55.537432  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:55.537531  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:55.552525  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:56.036986  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:56.037074  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:56.049750  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:56.537060  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:56.537144  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:56.548686  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:57.037931  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:57.038029  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:57.049828  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:57.511461  384965 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:54:57.511495  384965 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:54:57.511510  384965 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:54:57.511571  384965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:57.552784  384965 cri.go:89] found id: ""
	I1002 11:54:57.552866  384965 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:54:57.567867  384965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:54:57.578391  384965 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:54:57.578474  384965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:57.587065  384965 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:57.587086  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:57.717787  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.423038  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.607300  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.687023  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.778674  384965 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:54:58.778770  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:58.794920  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:58.923574  384505 retry.go:31] will retry after 19.662203801s: kubelet not initialised
	I1002 11:54:58.715622  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.716211  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has current primary IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.716229  384344 main.go:141] libmachine: (no-preload-304121) Found IP for machine: 192.168.39.143
	I1002 11:54:58.716248  384344 main.go:141] libmachine: (no-preload-304121) Reserving static IP address...
	I1002 11:54:58.716781  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "no-preload-304121", mac: "52:54:00:11:b9:ea", ip: "192.168.39.143"} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.716823  384344 main.go:141] libmachine: (no-preload-304121) Reserved static IP address: 192.168.39.143
	I1002 11:54:58.716845  384344 main.go:141] libmachine: (no-preload-304121) DBG | skip adding static IP to network mk-no-preload-304121 - found existing host DHCP lease matching {name: "no-preload-304121", mac: "52:54:00:11:b9:ea", ip: "192.168.39.143"}
	I1002 11:54:58.716864  384344 main.go:141] libmachine: (no-preload-304121) DBG | Getting to WaitForSSH function...
	I1002 11:54:58.716875  384344 main.go:141] libmachine: (no-preload-304121) Waiting for SSH to be available...
	I1002 11:54:58.719551  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.719991  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.720031  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.720236  384344 main.go:141] libmachine: (no-preload-304121) DBG | Using SSH client type: external
	I1002 11:54:58.720273  384344 main.go:141] libmachine: (no-preload-304121) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa (-rw-------)
	I1002 11:54:58.720309  384344 main.go:141] libmachine: (no-preload-304121) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:54:58.720329  384344 main.go:141] libmachine: (no-preload-304121) DBG | About to run SSH command:
	I1002 11:54:58.720355  384344 main.go:141] libmachine: (no-preload-304121) DBG | exit 0
	I1002 11:54:58.866583  384344 main.go:141] libmachine: (no-preload-304121) DBG | SSH cmd err, output: <nil>: 
	I1002 11:54:58.866916  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetConfigRaw
	I1002 11:54:58.867637  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:54:58.870844  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.871270  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.871305  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.871677  384344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/config.json ...
	I1002 11:54:58.871886  384344 machine.go:88] provisioning docker machine ...
	I1002 11:54:58.871906  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:54:58.872159  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetMachineName
	I1002 11:54:58.872343  384344 buildroot.go:166] provisioning hostname "no-preload-304121"
	I1002 11:54:58.872370  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetMachineName
	I1002 11:54:58.872566  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:58.875795  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.876215  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.876252  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.876420  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:58.876592  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:58.876766  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:58.876935  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:58.877113  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:58.877512  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:54:58.877528  384344 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-304121 && echo "no-preload-304121" | sudo tee /etc/hostname
	I1002 11:54:59.032306  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-304121
	
	I1002 11:54:59.032336  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.035842  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.036373  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.036412  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.036749  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:59.036953  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.037145  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.037313  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:59.037564  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:59.038035  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:54:59.038064  384344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-304121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-304121/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-304121' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:54:59.175880  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:54:59.175910  384344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:54:59.175933  384344 buildroot.go:174] setting up certificates
	I1002 11:54:59.175945  384344 provision.go:83] configureAuth start
	I1002 11:54:59.175957  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetMachineName
	I1002 11:54:59.176253  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:54:59.179169  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.179541  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.179577  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.179797  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.182011  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.182418  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.182451  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.182653  384344 provision.go:138] copyHostCerts
	I1002 11:54:59.182718  384344 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:54:59.182732  384344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:54:59.182807  384344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:54:59.182919  384344 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:54:59.182931  384344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:54:59.182963  384344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:54:59.183050  384344 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:54:59.183060  384344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:54:59.183088  384344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:54:59.183174  384344 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.no-preload-304121 san=[192.168.39.143 192.168.39.143 localhost 127.0.0.1 minikube no-preload-304121]
	I1002 11:54:59.492171  384344 provision.go:172] copyRemoteCerts
	I1002 11:54:59.492239  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:54:59.492266  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.495249  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.495698  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.495746  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.495900  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:59.496143  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.496299  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:59.496460  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:54:59.594538  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1002 11:54:59.625319  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 11:54:59.652745  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:54:59.676895  384344 provision.go:86] duration metric: configureAuth took 500.931279ms
	I1002 11:54:59.676930  384344 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:54:59.677160  384344 config.go:182] Loaded profile config "no-preload-304121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:54:59.677259  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.680393  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.680730  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.680764  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.681190  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:59.681491  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.681698  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.681875  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:59.682112  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:59.682651  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:54:59.682684  384344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:55:00.029184  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:55:00.029213  384344 machine.go:91] provisioned docker machine in 1.157312136s
	I1002 11:55:00.029226  384344 start.go:300] post-start starting for "no-preload-304121" (driver="kvm2")
	I1002 11:55:00.029240  384344 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:55:00.029296  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.029683  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:55:00.029722  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.032977  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.033456  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.033488  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.033677  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.033919  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.034136  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.034351  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:55:00.137946  384344 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:55:00.144169  384344 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:55:00.144209  384344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:55:00.144291  384344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:55:00.144405  384344 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:55:00.144609  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:55:00.157898  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:55:00.186547  384344 start.go:303] post-start completed in 157.300734ms
	I1002 11:55:00.186580  384344 fix.go:56] fixHost completed within 20.691216247s
	I1002 11:55:00.186609  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.189905  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.190374  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.190411  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.190718  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.190940  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.191159  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.191335  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.191494  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:55:00.191981  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:55:00.191996  384344 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:55:00.328123  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247700.270150690
	
	I1002 11:55:00.328155  384344 fix.go:206] guest clock: 1696247700.270150690
	I1002 11:55:00.328166  384344 fix.go:219] Guest: 2023-10-02 11:55:00.27015069 +0000 UTC Remote: 2023-10-02 11:55:00.186584697 +0000 UTC m=+358.877281851 (delta=83.565993ms)
	I1002 11:55:00.328193  384344 fix.go:190] guest clock delta is within tolerance: 83.565993ms
	I1002 11:55:00.328207  384344 start.go:83] releasing machines lock for "no-preload-304121", held for 20.832874678s
	I1002 11:55:00.328234  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.328584  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:55:00.331898  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.332432  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.332468  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.332651  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.333263  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.333480  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.333586  384344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:55:00.333647  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.333895  384344 ssh_runner.go:195] Run: cat /version.json
	I1002 11:55:00.333943  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.336673  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.336920  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.337021  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.337083  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.337207  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.337399  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.337487  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.337518  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.337566  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.337642  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.337734  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:55:00.337835  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.338131  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.338307  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:55:00.427708  384344 ssh_runner.go:195] Run: systemctl --version
	I1002 11:55:00.456367  384344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:55:00.604389  384344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:55:00.612859  384344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:55:00.612968  384344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:55:00.627986  384344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:55:00.628056  384344 start.go:469] detecting cgroup driver to use...
	I1002 11:55:00.628128  384344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:55:00.643670  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:55:00.656987  384344 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:55:00.657058  384344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:55:00.669708  384344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:55:00.682586  384344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:55:00.790044  384344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:55:00.913634  384344 docker.go:213] disabling docker service ...
	I1002 11:55:00.913717  384344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:55:00.926496  384344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:55:00.938769  384344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:55:01.045413  384344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:55:01.169133  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:55:01.182168  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:55:01.201850  384344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:55:01.201926  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.214874  384344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:55:01.214972  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.225123  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.237560  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.247898  384344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:55:01.260797  384344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:55:01.271528  384344 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:55:01.271602  384344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:55:01.285906  384344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:55:01.297623  384344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:55:01.429828  384344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:55:01.617340  384344 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:55:01.617486  384344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:55:01.622871  384344 start.go:537] Will wait 60s for crictl version
	I1002 11:55:01.622942  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:01.627257  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:55:01.674032  384344 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:55:01.674130  384344 ssh_runner.go:195] Run: crio --version
	I1002 11:55:01.726822  384344 ssh_runner.go:195] Run: crio --version
	I1002 11:55:01.777433  384344 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:54:59.549254  384787 pod_ready.go:102] pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:01.550493  384787 pod_ready.go:92] pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:01.550524  384787 pod_ready.go:81] duration metric: took 4.495364436s waiting for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:01.550537  384787 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:59.310529  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:59.811582  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:00.310859  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:00.810518  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:01.311217  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:01.336761  384965 api_server.go:72] duration metric: took 2.55808678s to wait for apiserver process to appear ...
	I1002 11:55:01.336793  384965 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:55:01.336814  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:01.778891  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:55:01.781741  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:01.782048  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:01.782088  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:01.782334  384344 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 11:55:01.787047  384344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:55:01.803390  384344 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:55:01.803482  384344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:55:01.853839  384344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 11:55:01.853868  384344 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.2 registry.k8s.io/kube-controller-manager:v1.28.2 registry.k8s.io/kube-scheduler:v1.28.2 registry.k8s.io/kube-proxy:v1.28.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 11:55:01.853954  384344 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:01.853966  384344 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:01.854164  384344 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:01.854189  384344 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:01.854254  384344 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:01.854169  384344 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:01.854325  384344 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1002 11:55:01.854171  384344 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:01.855315  384344 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:01.855339  384344 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:01.855355  384344 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:01.855809  384344 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:01.855841  384344 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:01.855856  384344 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1002 11:55:01.855809  384344 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:01.855815  384344 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.001275  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.001299  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:02.001275  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:02.002150  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1002 11:55:02.004275  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:02.007591  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:02.028882  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:02.199630  384344 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1002 11:55:02.199751  384344 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.199678  384344 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1002 11:55:02.199838  384344 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:02.199866  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.199890  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.199707  384344 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.2" does not exist at hash "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57" in container runtime
	I1002 11:55:02.199951  384344 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:02.199981  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305560  384344 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.2" does not exist at hash "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce" in container runtime
	I1002 11:55:02.305618  384344 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:02.305670  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305721  384344 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.2" does not exist at hash "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8" in container runtime
	I1002 11:55:02.305784  384344 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:02.305826  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305853  384344 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.2" needs transfer: "registry.k8s.io/kube-proxy:v1.28.2" does not exist at hash "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0" in container runtime
	I1002 11:55:02.305893  384344 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:02.305934  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305943  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:02.305999  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:02.306035  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.403560  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:02.403701  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1002 11:55:02.403791  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1002 11:55:02.403861  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:02.403983  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1002 11:55:02.404056  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1002 11:55:02.404148  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.2
	I1002 11:55:02.404200  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.2
	I1002 11:55:02.404274  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:02.512787  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.2
	I1002 11:55:02.512909  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1002 11:55:02.513038  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1002 11:55:02.513062  384344 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1002 11:55:02.513091  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1002 11:55:02.513169  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.2 (exists)
	I1002 11:55:02.513217  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.2
	I1002 11:55:02.513258  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.2
	I1002 11:55:02.513292  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1002 11:55:02.513343  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.2
	I1002 11:55:02.513399  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.2
	I1002 11:55:02.519549  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.2 (exists)
	I1002 11:55:02.529685  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.2 (exists)
	I1002 11:55:02.739233  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:03.573767  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:05.577137  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:07.577690  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:06.191660  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:06.191697  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:06.191711  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:06.268234  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:06.268270  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:06.769081  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:06.775235  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:06.775267  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:07.268848  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:07.289255  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:07.289294  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:07.769010  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:07.776315  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 200:
	ok
	I1002 11:55:07.785543  384965 api_server.go:141] control plane version: v1.28.2
	I1002 11:55:07.785578  384965 api_server.go:131] duration metric: took 6.448776132s to wait for apiserver health ...
	I1002 11:55:07.785620  384965 cni.go:84] Creating CNI manager for ""
	I1002 11:55:07.785630  384965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:55:07.963339  384965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:55:07.965036  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:55:08.003261  384965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:55:08.072023  384965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:55:08.084616  384965 system_pods.go:59] 8 kube-system pods found
	I1002 11:55:08.084657  384965 system_pods.go:61] "coredns-5dd5756b68-9wv56" [f04d6125-ea28-41cc-9251-7ccee27162bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:55:08.084670  384965 system_pods.go:61] "etcd-default-k8s-diff-port-777999" [5bc34f24-2922-4ce8-b11d-935f9b3c8b4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:55:08.084680  384965 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-777999" [b44f8ca6-f43e-4c99-af8d-23255f94257c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:55:08.084693  384965 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-777999" [5e20830f-d51b-4ca7-a7b6-e0f24f4a50e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 11:55:08.084709  384965 system_pods.go:61] "kube-proxy-gchnc" [061811c7-2ac8-448a-b441-838f9aaf9145] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 11:55:08.084723  384965 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-777999" [9d874541-7d2e-4c9b-8935-2bf7386e6a07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 11:55:08.084737  384965 system_pods.go:61] "metrics-server-57f55c9bc5-wk2c7" [f28e9db7-2182-40d8-85a7-fa40c2ff8077] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:55:08.084752  384965 system_pods.go:61] "storage-provisioner" [aff1275b-909d-4c70-9fb5-cb36170c591e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:55:08.084767  384965 system_pods.go:74] duration metric: took 12.715919ms to wait for pod list to return data ...
	I1002 11:55:08.084783  384965 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:55:08.089289  384965 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:55:08.089323  384965 node_conditions.go:123] node cpu capacity is 2
	I1002 11:55:08.089337  384965 node_conditions.go:105] duration metric: took 4.548285ms to run NodePressure ...
	I1002 11:55:08.089359  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:08.496528  384965 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:55:08.509299  384965 kubeadm.go:787] kubelet initialised
	I1002 11:55:08.509331  384965 kubeadm.go:788] duration metric: took 12.771905ms waiting for restarted kubelet to initialise ...
	I1002 11:55:08.509343  384965 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:08.516124  384965 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.528838  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.528938  384965 pod_ready.go:81] duration metric: took 12.780895ms waiting for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.528967  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.529001  384965 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.534830  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.534867  384965 pod_ready.go:81] duration metric: took 5.838075ms waiting for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.534882  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.534892  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.549854  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.549885  384965 pod_ready.go:81] duration metric: took 14.983531ms waiting for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.549900  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.549913  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.559230  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.559313  384965 pod_ready.go:81] duration metric: took 9.38728ms waiting for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.559335  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.559347  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.900163  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-proxy-gchnc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.900190  384965 pod_ready.go:81] duration metric: took 340.83496ms waiting for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.900199  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-proxy-gchnc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.900208  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:09.516054  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.516096  384965 pod_ready.go:81] duration metric: took 615.877294ms waiting for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:09.516112  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.516121  384965 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:09.701735  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.701764  384965 pod_ready.go:81] duration metric: took 185.632721ms waiting for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:09.701775  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.701782  384965 pod_ready.go:38] duration metric: took 1.192428133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:09.701800  384965 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:55:09.715441  384965 ops.go:34] apiserver oom_adj: -16
	I1002 11:55:09.715471  384965 kubeadm.go:640] restartCluster took 22.236518554s
	I1002 11:55:09.715483  384965 kubeadm.go:406] StartCluster complete in 22.288924118s
	I1002 11:55:09.715506  384965 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:55:09.715603  384965 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:55:09.717604  384965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:55:09.832925  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:55:09.832958  384965 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:55:09.833045  384965 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:55:09.833070  384965 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-777999"
	I1002 11:55:09.833078  384965 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-777999"
	I1002 11:55:09.833081  384965 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-777999"
	I1002 11:55:09.833097  384965 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-777999"
	W1002 11:55:09.833106  384965 addons.go:240] addon storage-provisioner should already be in state true
	I1002 11:55:09.833106  384965 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-777999"
	I1002 11:55:09.833108  384965 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-777999"
	W1002 11:55:09.833125  384965 addons.go:240] addon metrics-server should already be in state true
	I1002 11:55:09.833170  384965 host.go:66] Checking if "default-k8s-diff-port-777999" exists ...
	I1002 11:55:09.833170  384965 host.go:66] Checking if "default-k8s-diff-port-777999" exists ...
	I1002 11:55:09.833570  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.833592  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.833615  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.833624  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.833634  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.833646  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.839134  384965 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-777999" context rescaled to 1 replicas
	I1002 11:55:09.839204  384965 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:55:09.882782  384965 out.go:177] * Verifying Kubernetes components...
	I1002 11:55:09.852478  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I1002 11:55:09.853164  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46377
	I1002 11:55:09.853212  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33563
	I1002 11:55:09.884413  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:55:09.884847  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.884862  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.884978  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.885450  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.885473  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.885590  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.885616  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.885875  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.885905  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.885931  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.885991  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.886291  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.886499  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.886608  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.886609  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.886643  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.886650  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.890816  384965 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-777999"
	W1002 11:55:09.890840  384965 addons.go:240] addon default-storageclass should already be in state true
	I1002 11:55:09.890874  384965 host.go:66] Checking if "default-k8s-diff-port-777999" exists ...
	I1002 11:55:09.891346  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.891381  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.905399  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I1002 11:55:09.905472  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1002 11:55:09.905949  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.906013  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.906516  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.906548  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.906616  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.906638  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.907044  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.907050  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.907204  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.907296  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.907802  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I1002 11:55:09.908797  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.909184  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:55:09.911200  384965 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 11:55:09.909554  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.909557  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:55:09.913028  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.913040  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 11:55:09.913097  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 11:55:09.913128  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:55:09.914961  384965 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:10.102329  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.589219551s)
	I1002 11:55:10.102369  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1002 11:55:10.102405  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.2
	I1002 11:55:10.102437  384344 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.2: (7.58915139s)
	I1002 11:55:10.102467  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.2 (exists)
	I1002 11:55:10.102468  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.2
	I1002 11:55:10.102517  384344 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (7.363200276s)
	I1002 11:55:10.102554  384344 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1002 11:55:10.102587  384344 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:10.102639  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:10.107376  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:09.913417  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.916644  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.916734  384965 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:55:09.916751  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:55:09.916773  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:55:09.917177  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.917217  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.917938  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:55:09.917968  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.918238  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:55:09.918494  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:55:09.918725  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:55:09.919087  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:55:09.920001  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.920470  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:55:09.920499  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.920702  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:55:09.920898  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:55:09.921037  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:55:09.921164  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:55:09.936676  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41353
	I1002 11:55:09.937243  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.937814  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.937838  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.938269  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.938503  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.940662  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:55:09.940930  384965 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:55:09.940952  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:55:09.940975  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:55:09.944168  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.944929  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:55:09.944938  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:55:09.944972  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.945129  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:55:09.945323  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:55:09.945464  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:55:10.048027  384965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:55:10.064428  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 11:55:10.064457  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 11:55:10.113892  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 11:55:10.113922  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 11:55:10.162803  384965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:55:10.203352  384965 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-777999" to be "Ready" ...
	I1002 11:55:10.203377  384965 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1002 11:55:10.209916  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:55:10.209945  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 11:55:10.283168  384965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:55:11.838556  384965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.790470973s)
	I1002 11:55:11.838584  384965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.675739061s)
	I1002 11:55:11.838618  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.838620  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.838659  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.838635  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.838886  384965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.555664753s)
	I1002 11:55:11.838941  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.838954  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.838980  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.838992  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.839001  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.838961  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.839104  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.839139  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.839157  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.839170  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.839303  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.839369  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.839409  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.839421  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.839431  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.839688  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.839700  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.839710  384965 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-777999"
	I1002 11:55:11.841889  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.841915  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.842201  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.842253  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.842259  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.842269  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.849511  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.849529  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.849874  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.849878  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.849901  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.853656  384965 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1002 11:55:10.075236  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:12.576161  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:11.855303  384965 addons.go:502] enable addons completed in 2.022363817s: enabled=[metrics-server storage-provisioner default-storageclass]
	I1002 11:55:12.217572  384965 node_ready.go:58] node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:12.931492  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.2: (2.828987001s)
	I1002 11:55:12.931534  384344 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.824127868s)
	I1002 11:55:12.931594  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1002 11:55:12.931539  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.2 from cache
	I1002 11:55:12.931660  384344 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1002 11:55:12.931718  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1002 11:55:12.931728  384344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1002 11:55:12.939018  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1002 11:55:14.293770  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.362024408s)
	I1002 11:55:14.293812  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1002 11:55:14.293844  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1002 11:55:14.293919  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1002 11:55:15.843943  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.2: (1.549996136s)
	I1002 11:55:15.843970  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.2 from cache
	I1002 11:55:15.843995  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.2
	I1002 11:55:15.844044  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.2
	I1002 11:55:15.077109  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:17.575669  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:14.219000  384965 node_ready.go:58] node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:16.717611  384965 node_ready.go:49] node "default-k8s-diff-port-777999" has status "Ready":"True"
	I1002 11:55:16.717639  384965 node_ready.go:38] duration metric: took 6.514250616s waiting for node "default-k8s-diff-port-777999" to be "Ready" ...
	I1002 11:55:16.717652  384965 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:16.724331  384965 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.242058  384965 pod_ready.go:92] pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:17.242084  384965 pod_ready.go:81] duration metric: took 517.728305ms waiting for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.242093  384965 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.247916  384965 pod_ready.go:92] pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:17.247946  384965 pod_ready.go:81] duration metric: took 5.844733ms waiting for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.247960  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.596133  384505 kubeadm.go:787] kubelet initialised
	I1002 11:55:18.596163  384505 kubeadm.go:788] duration metric: took 48.489169583s waiting for restarted kubelet to initialise ...
	I1002 11:55:18.596173  384505 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:18.603606  384505 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9xdpq" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.612080  384505 pod_ready.go:92] pod "coredns-5644d7b6d9-9xdpq" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.612112  384505 pod_ready.go:81] duration metric: took 8.472159ms waiting for pod "coredns-5644d7b6d9-9xdpq" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.612124  384505 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-xrfq8" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.618116  384505 pod_ready.go:92] pod "coredns-5644d7b6d9-xrfq8" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.618147  384505 pod_ready.go:81] duration metric: took 6.014635ms waiting for pod "coredns-5644d7b6d9-xrfq8" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.618159  384505 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.624120  384505 pod_ready.go:92] pod "etcd-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.624148  384505 pod_ready.go:81] duration metric: took 5.979959ms waiting for pod "etcd-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.624162  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.631373  384505 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.631404  384505 pod_ready.go:81] duration metric: took 7.233318ms waiting for pod "kube-apiserver-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.631418  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.990560  384505 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.990593  384505 pod_ready.go:81] duration metric: took 359.165649ms waiting for pod "kube-controller-manager-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.990608  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gkhxb" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.708531  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.2: (1.864455947s)
	I1002 11:55:17.708567  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.2 from cache
	I1002 11:55:17.708616  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.2
	I1002 11:55:17.708669  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.2
	I1002 11:55:20.492385  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.2: (2.783683562s)
	I1002 11:55:20.492427  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.2 from cache
	I1002 11:55:20.492455  384344 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1002 11:55:20.492508  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1002 11:55:19.575875  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:22.075666  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:19.526494  384965 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:19.526525  384965 pod_ready.go:81] duration metric: took 2.278556042s waiting for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.526542  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:20.927586  384965 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:20.927626  384965 pod_ready.go:81] duration metric: took 1.401074339s waiting for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:20.927641  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.117907  384965 pod_ready.go:92] pod "kube-proxy-gchnc" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:21.117943  384965 pod_ready.go:81] duration metric: took 190.292051ms waiting for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.117957  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.517768  384965 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:21.517788  384965 pod_ready.go:81] duration metric: took 399.822591ms waiting for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.517800  384965 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:23.829704  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:19.390560  384505 pod_ready.go:92] pod "kube-proxy-gkhxb" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:19.390588  384505 pod_ready.go:81] duration metric: took 399.970888ms waiting for pod "kube-proxy-gkhxb" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.390602  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.791405  384505 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:19.791443  384505 pod_ready.go:81] duration metric: took 400.826662ms waiting for pod "kube-scheduler-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.791458  384505 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:22.098383  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:24.098434  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:21.439323  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1002 11:55:21.439378  384344 cache_images.go:123] Successfully loaded all cached images
	I1002 11:55:21.439386  384344 cache_images.go:92] LoadImages completed in 19.585504619s
	I1002 11:55:21.439504  384344 ssh_runner.go:195] Run: crio config
	I1002 11:55:21.510657  384344 cni.go:84] Creating CNI manager for ""
	I1002 11:55:21.510683  384344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:55:21.510703  384344 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:55:21.510734  384344 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.143 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-304121 NodeName:no-preload-304121 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:55:21.511445  384344 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-304121"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.143
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.143"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:55:21.511576  384344 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-304121 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:no-preload-304121 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:55:21.511643  384344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:55:21.522719  384344 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:55:21.522788  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:55:21.531557  384344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1002 11:55:21.548551  384344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:55:21.565791  384344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1002 11:55:21.583240  384344 ssh_runner.go:195] Run: grep 192.168.39.143	control-plane.minikube.internal$ /etc/hosts
	I1002 11:55:21.587268  384344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:55:21.600487  384344 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121 for IP: 192.168.39.143
	I1002 11:55:21.600520  384344 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:55:21.600663  384344 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:55:21.600697  384344 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:55:21.600794  384344 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/client.key
	I1002 11:55:21.600873  384344 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/apiserver.key.62e94479
	I1002 11:55:21.600926  384344 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/proxy-client.key
	I1002 11:55:21.601033  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:55:21.601061  384344 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:55:21.601071  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:55:21.601093  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:55:21.601118  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:55:21.601146  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:55:21.601182  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:55:21.601818  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:55:21.626860  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:55:21.650402  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:55:21.678876  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 11:55:21.704351  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:55:21.729385  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:55:21.755185  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:55:21.779149  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:55:21.802775  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:55:21.825691  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:55:21.849575  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:55:21.872777  384344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:55:21.890629  384344 ssh_runner.go:195] Run: openssl version
	I1002 11:55:21.896382  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:55:21.906415  384344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:55:21.911134  384344 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:55:21.911202  384344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:55:21.916782  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:55:21.926770  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:55:21.936394  384344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:55:21.940874  384344 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:55:21.940944  384344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:55:21.946542  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:55:21.956590  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:55:21.966128  384344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:55:21.971092  384344 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:55:21.971144  384344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:55:21.976625  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:55:21.987142  384344 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:55:21.991548  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:55:21.998311  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:55:22.004302  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:55:22.010267  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:55:22.016280  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:55:22.022273  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:55:22.027921  384344 kubeadm.go:404] StartCluster: {Name:no-preload-304121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-304121 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:55:22.028050  384344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:55:22.028141  384344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:55:22.068066  384344 cri.go:89] found id: ""
	I1002 11:55:22.068147  384344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:55:22.079381  384344 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:55:22.079406  384344 kubeadm.go:636] restartCluster start
	I1002 11:55:22.079471  384344 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:55:22.088977  384344 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:22.090087  384344 kubeconfig.go:92] found "no-preload-304121" server: "https://192.168.39.143:8443"
	I1002 11:55:22.093401  384344 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:55:22.103315  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:22.103378  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:22.114520  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:22.114538  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:22.114586  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:22.126040  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:22.626326  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:22.626438  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:22.637215  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:23.126863  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:23.126967  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:23.138035  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:23.626453  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:23.626539  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:23.639113  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:24.126445  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:24.126541  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:24.139561  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:24.626423  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:24.626534  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:24.638442  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:25.127011  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:25.127103  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:25.139945  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:25.626451  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:25.626539  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:25.638919  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:26.126459  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:26.126551  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:26.140068  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:24.574146  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.574656  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.329321  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:28.329400  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.098690  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:28.098837  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.626344  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:26.626445  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:26.641274  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:27.126886  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:27.126965  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:27.139451  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:27.627110  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:27.627264  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:27.640675  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:28.126212  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:28.126301  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:28.140048  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:28.626433  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:28.626530  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:28.639683  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:29.127030  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:29.127142  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:29.139681  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:29.626803  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:29.626878  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:29.639468  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:30.127126  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:30.127231  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:30.140930  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:30.626441  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:30.626535  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:30.639070  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:31.126421  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:31.126503  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:31.138724  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:28.575201  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:31.074607  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:30.830079  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:32.832350  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:30.099074  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:32.596870  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:31.627189  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:31.627281  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:31.640362  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:32.104121  384344 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:55:32.104153  384344 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:55:32.104169  384344 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:55:32.104223  384344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:55:32.147672  384344 cri.go:89] found id: ""
	I1002 11:55:32.147756  384344 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:55:32.164049  384344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:55:32.174941  384344 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:55:32.175041  384344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:55:32.185756  384344 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:55:32.185783  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:32.328093  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.120678  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.341378  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.433591  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.518381  384344 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:55:33.518458  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:33.530334  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:34.043021  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:34.542602  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:35.042825  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:35.542484  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:36.042547  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:36.067551  384344 api_server.go:72] duration metric: took 2.549193903s to wait for apiserver process to appear ...
	I1002 11:55:36.067574  384344 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:55:36.067593  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:33.076598  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:35.077561  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:37.575927  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:35.328950  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:37.330925  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:34.598649  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:36.598851  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:39.099902  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:40.195285  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:40.195318  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:40.195330  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:40.261287  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:40.261324  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:40.762016  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:40.776249  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:40.776279  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:41.262027  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:41.277940  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:41.277971  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:41.762404  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:41.767751  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 200:
	ok
	I1002 11:55:41.775963  384344 api_server.go:141] control plane version: v1.28.2
	I1002 11:55:41.775988  384344 api_server.go:131] duration metric: took 5.708406738s to wait for apiserver health ...
	I1002 11:55:41.775997  384344 cni.go:84] Creating CNI manager for ""
	I1002 11:55:41.776003  384344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:55:41.777791  384344 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:55:40.076215  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:42.574607  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:39.831982  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:42.330541  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:41.599812  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:44.097139  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:41.779495  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:55:41.796340  384344 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:55:41.838383  384344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:55:41.863561  384344 system_pods.go:59] 8 kube-system pods found
	I1002 11:55:41.863600  384344 system_pods.go:61] "coredns-5dd5756b68-hn8bw" [f388b655-7f90-436d-a1fd-458f22c7f5e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:55:41.863612  384344 system_pods.go:61] "etcd-no-preload-304121" [b45507da-d57a-45f5-82a3-37b273c42747] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:55:41.863621  384344 system_pods.go:61] "kube-apiserver-no-preload-304121" [7f8cdde0-5050-4cea-87c5-56bd0a5d623b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:55:41.863630  384344 system_pods.go:61] "kube-controller-manager-no-preload-304121" [24d40a92-d549-48c8-bf5f-983fdc15dcae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 11:55:41.863641  384344 system_pods.go:61] "kube-proxy-cwvr7" [9e3f08e6-92ad-4ebc-afe3-44d5ab81a63a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 11:55:41.863651  384344 system_pods.go:61] "kube-scheduler-no-preload-304121" [cc3c6828-f829-416a-9cfd-ddcc0f485578] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 11:55:41.863665  384344 system_pods.go:61] "metrics-server-57f55c9bc5-lrqt9" [7b70c72d-06b3-40ae-8e0c-ea4794cfe47b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:55:41.863682  384344 system_pods.go:61] "storage-provisioner" [457608a4-5ba9-45d2-841e-889930ce6bd7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:55:41.863694  384344 system_pods.go:74] duration metric: took 25.279676ms to wait for pod list to return data ...
	I1002 11:55:41.863707  384344 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:55:41.870534  384344 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:55:41.870580  384344 node_conditions.go:123] node cpu capacity is 2
	I1002 11:55:41.870636  384344 node_conditions.go:105] duration metric: took 6.921999ms to run NodePressure ...
	I1002 11:55:41.870666  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:42.164858  384344 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:55:42.169831  384344 kubeadm.go:787] kubelet initialised
	I1002 11:55:42.169855  384344 kubeadm.go:788] duration metric: took 4.969744ms waiting for restarted kubelet to initialise ...
	I1002 11:55:42.169864  384344 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:42.176338  384344 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:44.195428  384344 pod_ready.go:102] pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:46.195763  384344 pod_ready.go:92] pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:46.195786  384344 pod_ready.go:81] duration metric: took 4.019424872s waiting for pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:46.195795  384344 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:44.581249  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:47.074875  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:44.331120  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:46.833248  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:46.099661  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:48.599051  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:48.217529  384344 pod_ready.go:102] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:50.218641  384344 pod_ready.go:102] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:49.575639  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:52.074550  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:49.329627  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:51.330613  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:53.330666  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:51.098233  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:53.098464  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:52.717990  384344 pod_ready.go:102] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:53.716716  384344 pod_ready.go:92] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:53.716751  384344 pod_ready.go:81] duration metric: took 7.520948071s waiting for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:53.716769  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.738808  384344 pod_ready.go:92] pod "kube-apiserver-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.738832  384344 pod_ready.go:81] duration metric: took 1.022054915s waiting for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.738841  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.743979  384344 pod_ready.go:92] pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.743997  384344 pod_ready.go:81] duration metric: took 5.14952ms waiting for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.744006  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cwvr7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.749813  384344 pod_ready.go:92] pod "kube-proxy-cwvr7" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.749843  384344 pod_ready.go:81] duration metric: took 5.828956ms waiting for pod "kube-proxy-cwvr7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.749855  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.913811  384344 pod_ready.go:92] pod "kube-scheduler-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.913840  384344 pod_ready.go:81] duration metric: took 163.97545ms waiting for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.913853  384344 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.075263  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:56.574518  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:55.829643  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:58.328816  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:55.597512  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:57.598176  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:57.221008  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:59.221092  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:01.221270  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:59.075344  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:01.576898  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:00.330184  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:02.332041  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:59.599606  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:02.098251  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:04.098441  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:03.222251  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:05.721050  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:03.577043  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:06.075021  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:04.829434  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:06.830586  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:08.830689  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:06.100229  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:08.597399  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:07.725911  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:10.222275  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:08.574907  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:11.075011  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:10.831040  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:13.330226  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:10.599336  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:12.601338  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:12.721538  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:14.732864  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:13.075225  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:15.575267  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:15.831410  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:18.328821  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:15.098085  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:17.598406  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:17.220843  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:19.221812  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:18.074885  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:20.575220  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:20.830090  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:23.329239  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:20.108397  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:22.597329  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:21.723316  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:24.220817  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:26.222858  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:23.075276  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:25.574332  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:27.574872  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:25.330095  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:27.831991  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:24.598737  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:27.098098  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:28.721424  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:30.721466  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:30.074535  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:32.075748  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:30.330155  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:32.830009  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:29.597397  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:31.598389  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:33.598490  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:33.223521  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:35.719548  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:34.575020  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.074654  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:35.331567  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.832286  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:35.598829  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.599403  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.722451  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:40.223547  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:39.075433  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:41.575885  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:40.329838  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:42.330038  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:40.099862  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:42.598269  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:42.723887  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:45.221944  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:44.075128  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:46.075540  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:44.331960  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:46.829987  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:45.097469  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:47.098616  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:47.222108  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:49.721938  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:48.589935  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:51.074993  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:49.331749  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:51.830280  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:53.830731  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:49.598433  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:52.097486  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:54.098228  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:52.222646  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:54.726547  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:53.076322  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:55.575236  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:56.329005  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:58.330077  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:56.598418  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:59.098019  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:57.221753  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:59.721824  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:58.074481  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:00.576860  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:00.831342  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:03.328695  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:01.598124  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:04.098241  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:02.221634  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:04.222422  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:03.075152  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:05.076964  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:07.577621  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:05.328811  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:07.329223  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:06.598041  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:09.097384  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:06.724181  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:09.221108  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:11.223407  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:10.077910  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:12.574292  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:09.331559  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:11.828655  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:13.829065  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:11.098632  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:13.099363  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:13.721785  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:16.222201  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:14.574467  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:16.576124  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:15.829618  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:17.830298  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:15.598739  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:18.097854  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:18.722947  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:21.220868  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:19.074608  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:21.079563  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:20.329680  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:22.335299  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:20.109847  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:22.598994  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:23.221458  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:25.222249  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:23.575662  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:26.075111  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:24.829500  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:26.830678  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:25.099426  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:27.598577  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:27.721159  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:29.725949  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:28.574416  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:30.576031  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:29.330079  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:31.330829  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:33.829243  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:30.098615  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:32.598161  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:32.220933  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:34.720190  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:33.075330  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:35.075824  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:37.574487  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:35.829585  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:38.333997  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:34.598838  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:37.098682  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:36.723779  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:39.222751  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:40.074293  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:42.574665  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:40.829324  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:43.329265  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:39.598047  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:41.598338  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:44.097421  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:41.720538  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:43.721398  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:46.220972  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:45.074832  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:47.573962  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:45.330175  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:47.829115  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:46.097496  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:48.098108  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:48.221977  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:50.222810  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:49.576755  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.076442  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:49.829764  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.330051  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:50.099771  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.599534  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.223223  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:54.721544  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:54.574341  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:56.574466  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:54.829215  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:56.829468  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:58.829730  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:55.097141  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:57.598230  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:57.221854  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:59.721190  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:58.574928  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:00.575201  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:00.830156  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:03.329206  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:59.599838  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:02.097630  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:04.099434  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:01.724512  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:04.223282  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:03.076896  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:05.576101  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:05.330313  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:07.830038  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:06.597389  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:09.098677  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:06.721370  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:08.723225  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:11.224608  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:08.076078  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:10.574982  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:12.575115  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:09.832412  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:12.330220  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:11.597760  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:13.598933  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:13.726487  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.220404  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:14.575310  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.576156  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:14.330536  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.829762  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:18.833076  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.099600  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:18.599713  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:18.222118  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:20.722548  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:19.076690  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:21.575073  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:21.330604  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.829742  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:21.099777  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.598614  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.220183  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:25.221895  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.575355  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:25.575510  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:25.830538  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:28.329783  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:26.097290  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:28.097568  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:27.722661  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.221305  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:28.074457  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.074944  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:32.075905  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.831228  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:33.328903  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.098502  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:32.599120  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:32.221445  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:34.224133  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:34.075953  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:36.574997  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:35.330632  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:37.830117  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:35.101830  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:37.597886  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:36.722453  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:38.722619  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:40.725507  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:39.077321  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:41.574812  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:40.329004  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:42.329704  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:39.598243  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:41.600336  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:44.098496  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:43.225247  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:45.721116  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:43.574928  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:46.073774  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:44.830119  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:47.330229  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:46.101053  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:48.597255  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:47.724301  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:50.220275  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:48.074634  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:50.075498  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:52.576147  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:49.829149  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:52.328994  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:50.598113  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:53.096876  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:52.224282  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:54.721074  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:54.576355  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:57.074445  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:54.330474  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:56.331220  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:58.829693  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:55.098655  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:57.598659  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:56.721698  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:58.721958  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:01.222685  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:59.074760  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:01.076178  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:01.551409  384787 pod_ready.go:81] duration metric: took 4m0.000833874s waiting for pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:01.551453  384787 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:01.551481  384787 pod_ready.go:38] duration metric: took 4m12.797362192s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:01.551549  384787 kubeadm.go:640] restartCluster took 4m35.116019688s
	W1002 11:59:01.551687  384787 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1002 11:59:01.551757  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 11:59:00.830381  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:02.830963  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:00.103080  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:02.600662  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:03.720777  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:05.722315  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:05.330034  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:07.835944  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:05.098121  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:07.098246  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:09.099171  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:07.725245  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:10.221073  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:10.328885  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:12.331198  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:11.599122  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:14.099609  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:15.268063  384787 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.716271748s)
	I1002 11:59:15.268160  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:15.282632  384787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:59:15.294231  384787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:59:15.305847  384787 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:59:15.305892  384787 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 11:59:15.365627  384787 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 11:59:15.365703  384787 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 11:59:15.546049  384787 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 11:59:15.546175  384787 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 11:59:15.546300  384787 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 11:59:15.810889  384787 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 11:59:12.221147  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:14.222293  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:16.223901  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:15.813908  384787 out.go:204]   - Generating certificates and keys ...
	I1002 11:59:15.814079  384787 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 11:59:15.814178  384787 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 11:59:15.814257  384787 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 11:59:15.814309  384787 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1002 11:59:15.814451  384787 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 11:59:15.814528  384787 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1002 11:59:15.814874  384787 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1002 11:59:15.815489  384787 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1002 11:59:15.816067  384787 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 11:59:15.816586  384787 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 11:59:15.817099  384787 kubeadm.go:322] [certs] Using the existing "sa" key
	I1002 11:59:15.817161  384787 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 11:59:15.988485  384787 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 11:59:16.038665  384787 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 11:59:16.218038  384787 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 11:59:16.415133  384787 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 11:59:16.415531  384787 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 11:59:16.418000  384787 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 11:59:16.420952  384787 out.go:204]   - Booting up control plane ...
	I1002 11:59:16.421147  384787 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 11:59:16.421273  384787 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 11:59:16.423255  384787 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 11:59:16.442699  384787 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:59:16.443964  384787 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:59:16.444055  384787 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 11:59:16.602169  384787 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 11:59:14.331978  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:16.830188  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:18.831449  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:16.597731  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:18.598683  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:18.722865  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:21.222671  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:21.329396  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:21.518315  384965 pod_ready.go:81] duration metric: took 4m0.000482629s waiting for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:21.518363  384965 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:21.518378  384965 pod_ready.go:38] duration metric: took 4m4.800712941s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:21.518406  384965 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:59:21.518451  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 11:59:21.518519  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 11:59:21.587182  384965 cri.go:89] found id: "3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:21.587210  384965 cri.go:89] found id: ""
	I1002 11:59:21.587221  384965 logs.go:284] 1 containers: [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735]
	I1002 11:59:21.587285  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.592996  384965 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 11:59:21.593072  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 11:59:21.635267  384965 cri.go:89] found id: "8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:21.635293  384965 cri.go:89] found id: ""
	I1002 11:59:21.635306  384965 logs.go:284] 1 containers: [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d]
	I1002 11:59:21.635367  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.640347  384965 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 11:59:21.640428  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 11:59:21.686113  384965 cri.go:89] found id: "f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:21.686146  384965 cri.go:89] found id: ""
	I1002 11:59:21.686157  384965 logs.go:284] 1 containers: [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d]
	I1002 11:59:21.686224  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.691867  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 11:59:21.691959  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 11:59:21.745210  384965 cri.go:89] found id: "7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:21.745245  384965 cri.go:89] found id: ""
	I1002 11:59:21.745257  384965 logs.go:284] 1 containers: [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e]
	I1002 11:59:21.745330  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.750774  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 11:59:21.750862  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 11:59:21.810054  384965 cri.go:89] found id: "d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:21.810084  384965 cri.go:89] found id: ""
	I1002 11:59:21.810099  384965 logs.go:284] 1 containers: [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6]
	I1002 11:59:21.810161  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.815433  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 11:59:21.815518  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 11:59:21.858759  384965 cri.go:89] found id: "beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:21.858794  384965 cri.go:89] found id: ""
	I1002 11:59:21.858807  384965 logs.go:284] 1 containers: [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f]
	I1002 11:59:21.858887  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.864818  384965 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 11:59:21.864900  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 11:59:21.920312  384965 cri.go:89] found id: ""
	I1002 11:59:21.920343  384965 logs.go:284] 0 containers: []
	W1002 11:59:21.920353  384965 logs.go:286] No container was found matching "kindnet"
	I1002 11:59:21.920362  384965 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 11:59:21.920429  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 11:59:21.964677  384965 cri.go:89] found id: "2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:21.964708  384965 cri.go:89] found id: "b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:21.964715  384965 cri.go:89] found id: ""
	I1002 11:59:21.964724  384965 logs.go:284] 2 containers: [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358]
	I1002 11:59:21.964812  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.970514  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.976118  384965 logs.go:123] Gathering logs for kube-apiserver [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735] ...
	I1002 11:59:21.976158  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:22.026289  384965 logs.go:123] Gathering logs for kube-controller-manager [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f] ...
	I1002 11:59:22.026337  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:22.094330  384965 logs.go:123] Gathering logs for storage-provisioner [b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358] ...
	I1002 11:59:22.094389  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:22.133879  384965 logs.go:123] Gathering logs for container status ...
	I1002 11:59:22.133911  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 11:59:22.186645  384965 logs.go:123] Gathering logs for dmesg ...
	I1002 11:59:22.186688  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 11:59:22.200091  384965 logs.go:123] Gathering logs for kube-proxy [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6] ...
	I1002 11:59:22.200132  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:22.245383  384965 logs.go:123] Gathering logs for etcd [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d] ...
	I1002 11:59:22.245420  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:22.312167  384965 logs.go:123] Gathering logs for kube-scheduler [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e] ...
	I1002 11:59:22.312212  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:22.358596  384965 logs.go:123] Gathering logs for kubelet ...
	I1002 11:59:22.358631  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 11:59:22.417643  384965 logs.go:123] Gathering logs for coredns [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d] ...
	I1002 11:59:22.417695  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:22.467793  384965 logs.go:123] Gathering logs for storage-provisioner [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175] ...
	I1002 11:59:22.467830  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:22.509173  384965 logs.go:123] Gathering logs for CRI-O ...
	I1002 11:59:22.509216  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 11:59:23.037502  384965 logs.go:123] Gathering logs for describe nodes ...
	I1002 11:59:23.037554  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 11:59:19.792274  384505 pod_ready.go:81] duration metric: took 4m0.000796599s waiting for pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:19.792309  384505 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:19.792337  384505 pod_ready.go:38] duration metric: took 4m1.196150969s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:19.792389  384505 kubeadm.go:640] restartCluster took 5m11.202020009s
	W1002 11:59:19.792478  384505 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1002 11:59:19.792509  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 11:59:24.926525  384505 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.133982838s)
	I1002 11:59:24.926616  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:24.943054  384505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:59:24.953201  384505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:59:24.963105  384505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:59:24.963158  384505 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1002 11:59:25.027860  384505 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1002 11:59:25.027986  384505 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 11:59:25.214224  384505 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 11:59:25.214399  384505 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 11:59:25.214529  384505 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 11:59:25.472019  384505 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:59:25.472706  384505 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:59:25.481965  384505 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1002 11:59:25.630265  384505 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 11:59:25.105120  384787 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502545 seconds
	I1002 11:59:25.105321  384787 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 11:59:25.124191  384787 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 11:59:25.659886  384787 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 11:59:25.660110  384787 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-487027 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 11:59:26.180742  384787 kubeadm.go:322] [bootstrap-token] Using token: tg9u90.7q86afgrs7pieyop
	I1002 11:59:23.723485  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:25.724673  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:26.182574  384787 out.go:204]   - Configuring RBAC rules ...
	I1002 11:59:26.182738  384787 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 11:59:26.190559  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 11:59:26.200659  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 11:59:26.212391  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 11:59:26.217946  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 11:59:26.226534  384787 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 11:59:26.248000  384787 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 11:59:26.545226  384787 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 11:59:26.604475  384787 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 11:59:26.605636  384787 kubeadm.go:322] 
	I1002 11:59:26.605726  384787 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 11:59:26.605738  384787 kubeadm.go:322] 
	I1002 11:59:26.605810  384787 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 11:59:26.605815  384787 kubeadm.go:322] 
	I1002 11:59:26.605844  384787 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 11:59:26.605914  384787 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 11:59:26.605973  384787 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 11:59:26.605981  384787 kubeadm.go:322] 
	I1002 11:59:26.606052  384787 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 11:59:26.606058  384787 kubeadm.go:322] 
	I1002 11:59:26.606097  384787 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 11:59:26.606101  384787 kubeadm.go:322] 
	I1002 11:59:26.606143  384787 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 11:59:26.606203  384787 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 11:59:26.606263  384787 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 11:59:26.606267  384787 kubeadm.go:322] 
	I1002 11:59:26.606334  384787 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 11:59:26.606438  384787 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 11:59:26.606446  384787 kubeadm.go:322] 
	I1002 11:59:26.606580  384787 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tg9u90.7q86afgrs7pieyop \
	I1002 11:59:26.606732  384787 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 11:59:26.606764  384787 kubeadm.go:322] 	--control-plane 
	I1002 11:59:26.606773  384787 kubeadm.go:322] 
	I1002 11:59:26.606906  384787 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 11:59:26.606919  384787 kubeadm.go:322] 
	I1002 11:59:26.607066  384787 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tg9u90.7q86afgrs7pieyop \
	I1002 11:59:26.607192  384787 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 11:59:26.608470  384787 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 11:59:26.608503  384787 cni.go:84] Creating CNI manager for ""
	I1002 11:59:26.608547  384787 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:59:26.610426  384787 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:59:25.632074  384505 out.go:204]   - Generating certificates and keys ...
	I1002 11:59:25.632197  384505 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 11:59:25.632294  384505 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 11:59:25.632398  384505 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 11:59:25.632546  384505 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1002 11:59:25.632693  384505 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 11:59:25.633319  384505 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1002 11:59:25.633417  384505 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1002 11:59:25.633720  384505 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1002 11:59:25.634302  384505 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 11:59:25.635341  384505 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 11:59:25.635391  384505 kubeadm.go:322] [certs] Using the existing "sa" key
	I1002 11:59:25.635461  384505 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 11:59:25.743684  384505 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 11:59:25.940709  384505 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 11:59:26.418951  384505 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 11:59:26.676172  384505 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 11:59:26.677698  384505 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 11:59:26.612002  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:59:26.646809  384787 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:59:26.709486  384787 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:59:26.709648  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:26.709720  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=embed-certs-487027 minikube.k8s.io/updated_at=2023_10_02T11_59_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:26.778472  384787 ops.go:34] apiserver oom_adj: -16
	I1002 11:59:27.199359  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:27.351099  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:25.716079  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:59:25.739754  384965 api_server.go:72] duration metric: took 4m15.900505961s to wait for apiserver process to appear ...
	I1002 11:59:25.739785  384965 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:59:25.739834  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 11:59:25.739904  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 11:59:25.788719  384965 cri.go:89] found id: "3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:25.788747  384965 cri.go:89] found id: ""
	I1002 11:59:25.788758  384965 logs.go:284] 1 containers: [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735]
	I1002 11:59:25.788824  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.794426  384965 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 11:59:25.794500  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 11:59:25.836689  384965 cri.go:89] found id: "8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:25.836721  384965 cri.go:89] found id: ""
	I1002 11:59:25.836731  384965 logs.go:284] 1 containers: [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d]
	I1002 11:59:25.836808  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.841671  384965 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 11:59:25.841744  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 11:59:25.883947  384965 cri.go:89] found id: "f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:25.883976  384965 cri.go:89] found id: ""
	I1002 11:59:25.883986  384965 logs.go:284] 1 containers: [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d]
	I1002 11:59:25.884049  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.892631  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 11:59:25.892758  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 11:59:25.966469  384965 cri.go:89] found id: "7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:25.966502  384965 cri.go:89] found id: ""
	I1002 11:59:25.966514  384965 logs.go:284] 1 containers: [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e]
	I1002 11:59:25.966575  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.971814  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 11:59:25.971890  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 11:59:26.020970  384965 cri.go:89] found id: "d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:26.021002  384965 cri.go:89] found id: ""
	I1002 11:59:26.021013  384965 logs.go:284] 1 containers: [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6]
	I1002 11:59:26.021076  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.025582  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 11:59:26.025657  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 11:59:26.077339  384965 cri.go:89] found id: "beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:26.077371  384965 cri.go:89] found id: ""
	I1002 11:59:26.077383  384965 logs.go:284] 1 containers: [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f]
	I1002 11:59:26.077448  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.082311  384965 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 11:59:26.082396  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 11:59:26.126803  384965 cri.go:89] found id: ""
	I1002 11:59:26.126833  384965 logs.go:284] 0 containers: []
	W1002 11:59:26.126843  384965 logs.go:286] No container was found matching "kindnet"
	I1002 11:59:26.126851  384965 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 11:59:26.126992  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 11:59:26.176829  384965 cri.go:89] found id: "2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:26.176858  384965 cri.go:89] found id: "b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:26.176866  384965 cri.go:89] found id: ""
	I1002 11:59:26.176876  384965 logs.go:284] 2 containers: [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358]
	I1002 11:59:26.176945  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.182892  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.189288  384965 logs.go:123] Gathering logs for kube-controller-manager [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f] ...
	I1002 11:59:26.189316  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:26.257856  384965 logs.go:123] Gathering logs for storage-provisioner [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175] ...
	I1002 11:59:26.257910  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:26.297691  384965 logs.go:123] Gathering logs for container status ...
	I1002 11:59:26.297747  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 11:59:26.351211  384965 logs.go:123] Gathering logs for kubelet ...
	I1002 11:59:26.351254  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 11:59:26.425373  384965 logs.go:123] Gathering logs for describe nodes ...
	I1002 11:59:26.425416  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 11:59:26.568944  384965 logs.go:123] Gathering logs for kube-proxy [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6] ...
	I1002 11:59:26.568985  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:26.627406  384965 logs.go:123] Gathering logs for dmesg ...
	I1002 11:59:26.627449  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 11:59:26.641249  384965 logs.go:123] Gathering logs for coredns [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d] ...
	I1002 11:59:26.641281  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:26.696939  384965 logs.go:123] Gathering logs for storage-provisioner [b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358] ...
	I1002 11:59:26.696974  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:26.744365  384965 logs.go:123] Gathering logs for CRI-O ...
	I1002 11:59:26.744406  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 11:59:27.279579  384965 logs.go:123] Gathering logs for kube-apiserver [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735] ...
	I1002 11:59:27.279639  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:27.366447  384965 logs.go:123] Gathering logs for etcd [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d] ...
	I1002 11:59:27.366508  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:27.436429  384965 logs.go:123] Gathering logs for kube-scheduler [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e] ...
	I1002 11:59:27.436476  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:26.679464  384505 out.go:204]   - Booting up control plane ...
	I1002 11:59:26.679594  384505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 11:59:26.688060  384505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 11:59:26.700892  384505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 11:59:26.702245  384505 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 11:59:26.706277  384505 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 11:59:28.222692  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:30.223561  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:27.973079  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:28.472938  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:28.973900  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:29.473650  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:29.972984  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:30.473216  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:30.973931  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:31.474026  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:31.973024  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:32.473723  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:29.989828  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:59:29.995664  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 200:
	ok
	I1002 11:59:29.998819  384965 api_server.go:141] control plane version: v1.28.2
	I1002 11:59:29.998846  384965 api_server.go:131] duration metric: took 4.25905343s to wait for apiserver health ...
	I1002 11:59:29.998855  384965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:59:29.998882  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 11:59:29.998944  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 11:59:30.037898  384965 cri.go:89] found id: "3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:30.037925  384965 cri.go:89] found id: ""
	I1002 11:59:30.037935  384965 logs.go:284] 1 containers: [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735]
	I1002 11:59:30.038014  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.042751  384965 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 11:59:30.042835  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 11:59:30.085339  384965 cri.go:89] found id: "8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:30.085378  384965 cri.go:89] found id: ""
	I1002 11:59:30.085390  384965 logs.go:284] 1 containers: [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d]
	I1002 11:59:30.085463  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.090184  384965 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 11:59:30.090265  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 11:59:30.130574  384965 cri.go:89] found id: "f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:30.130602  384965 cri.go:89] found id: ""
	I1002 11:59:30.130611  384965 logs.go:284] 1 containers: [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d]
	I1002 11:59:30.130665  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.135040  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 11:59:30.135125  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 11:59:30.178044  384965 cri.go:89] found id: "7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:30.178067  384965 cri.go:89] found id: ""
	I1002 11:59:30.178078  384965 logs.go:284] 1 containers: [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e]
	I1002 11:59:30.178144  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.182586  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 11:59:30.182662  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 11:59:30.226121  384965 cri.go:89] found id: "d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:30.226142  384965 cri.go:89] found id: ""
	I1002 11:59:30.226152  384965 logs.go:284] 1 containers: [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6]
	I1002 11:59:30.226209  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.231080  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 11:59:30.231156  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 11:59:30.275499  384965 cri.go:89] found id: "beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:30.275533  384965 cri.go:89] found id: ""
	I1002 11:59:30.275545  384965 logs.go:284] 1 containers: [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f]
	I1002 11:59:30.275611  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.281023  384965 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 11:59:30.281089  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 11:59:30.325580  384965 cri.go:89] found id: ""
	I1002 11:59:30.325610  384965 logs.go:284] 0 containers: []
	W1002 11:59:30.325622  384965 logs.go:286] No container was found matching "kindnet"
	I1002 11:59:30.325630  384965 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 11:59:30.325691  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 11:59:30.372727  384965 cri.go:89] found id: "2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:30.372760  384965 cri.go:89] found id: "b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:30.372766  384965 cri.go:89] found id: ""
	I1002 11:59:30.372776  384965 logs.go:284] 2 containers: [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358]
	I1002 11:59:30.372838  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.377541  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.382371  384965 logs.go:123] Gathering logs for kubelet ...
	I1002 11:59:30.382403  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 11:59:30.449081  384965 logs.go:123] Gathering logs for etcd [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d] ...
	I1002 11:59:30.449132  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:30.519339  384965 logs.go:123] Gathering logs for coredns [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d] ...
	I1002 11:59:30.519392  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:30.566205  384965 logs.go:123] Gathering logs for storage-provisioner [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175] ...
	I1002 11:59:30.566250  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:30.607933  384965 logs.go:123] Gathering logs for container status ...
	I1002 11:59:30.607973  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 11:59:30.655904  384965 logs.go:123] Gathering logs for kube-apiserver [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735] ...
	I1002 11:59:30.655946  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:30.717563  384965 logs.go:123] Gathering logs for kube-controller-manager [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f] ...
	I1002 11:59:30.717619  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:30.779216  384965 logs.go:123] Gathering logs for storage-provisioner [b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358] ...
	I1002 11:59:30.779268  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:30.822075  384965 logs.go:123] Gathering logs for CRI-O ...
	I1002 11:59:30.822114  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 11:59:31.180609  384965 logs.go:123] Gathering logs for dmesg ...
	I1002 11:59:31.180664  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 11:59:31.196239  384965 logs.go:123] Gathering logs for describe nodes ...
	I1002 11:59:31.196274  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 11:59:31.345274  384965 logs.go:123] Gathering logs for kube-scheduler [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e] ...
	I1002 11:59:31.345318  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:31.392175  384965 logs.go:123] Gathering logs for kube-proxy [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6] ...
	I1002 11:59:31.392212  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:33.946599  384965 system_pods.go:59] 8 kube-system pods found
	I1002 11:59:33.946635  384965 system_pods.go:61] "coredns-5dd5756b68-9wv56" [f04d6125-ea28-41cc-9251-7ccee27162bc] Running
	I1002 11:59:33.946643  384965 system_pods.go:61] "etcd-default-k8s-diff-port-777999" [5bc34f24-2922-4ce8-b11d-935f9b3c8b4c] Running
	I1002 11:59:33.946650  384965 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-777999" [b44f8ca6-f43e-4c99-af8d-23255f94257c] Running
	I1002 11:59:33.946656  384965 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-777999" [5e20830f-d51b-4ca7-a7b6-e0f24f4a50e1] Running
	I1002 11:59:33.946659  384965 system_pods.go:61] "kube-proxy-gchnc" [061811c7-2ac8-448a-b441-838f9aaf9145] Running
	I1002 11:59:33.946664  384965 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-777999" [9d874541-7d2e-4c9b-8935-2bf7386e6a07] Running
	I1002 11:59:33.946677  384965 system_pods.go:61] "metrics-server-57f55c9bc5-wk2c7" [f28e9db7-2182-40d8-85a7-fa40c2ff8077] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:33.946687  384965 system_pods.go:61] "storage-provisioner" [aff1275b-909d-4c70-9fb5-cb36170c591e] Running
	I1002 11:59:33.946704  384965 system_pods.go:74] duration metric: took 3.947840874s to wait for pod list to return data ...
	I1002 11:59:33.946715  384965 default_sa.go:34] waiting for default service account to be created ...
	I1002 11:59:33.950028  384965 default_sa.go:45] found service account: "default"
	I1002 11:59:33.950059  384965 default_sa.go:55] duration metric: took 3.333093ms for default service account to be created ...
	I1002 11:59:33.950069  384965 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 11:59:33.956623  384965 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:33.956651  384965 system_pods.go:89] "coredns-5dd5756b68-9wv56" [f04d6125-ea28-41cc-9251-7ccee27162bc] Running
	I1002 11:59:33.956657  384965 system_pods.go:89] "etcd-default-k8s-diff-port-777999" [5bc34f24-2922-4ce8-b11d-935f9b3c8b4c] Running
	I1002 11:59:33.956662  384965 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-777999" [b44f8ca6-f43e-4c99-af8d-23255f94257c] Running
	I1002 11:59:33.956666  384965 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-777999" [5e20830f-d51b-4ca7-a7b6-e0f24f4a50e1] Running
	I1002 11:59:33.956670  384965 system_pods.go:89] "kube-proxy-gchnc" [061811c7-2ac8-448a-b441-838f9aaf9145] Running
	I1002 11:59:33.956674  384965 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-777999" [9d874541-7d2e-4c9b-8935-2bf7386e6a07] Running
	I1002 11:59:33.956681  384965 system_pods.go:89] "metrics-server-57f55c9bc5-wk2c7" [f28e9db7-2182-40d8-85a7-fa40c2ff8077] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:33.956686  384965 system_pods.go:89] "storage-provisioner" [aff1275b-909d-4c70-9fb5-cb36170c591e] Running
	I1002 11:59:33.956694  384965 system_pods.go:126] duration metric: took 6.618721ms to wait for k8s-apps to be running ...
	I1002 11:59:33.956704  384965 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:59:33.956749  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:33.976674  384965 system_svc.go:56] duration metric: took 19.952308ms WaitForService to wait for kubelet.
	I1002 11:59:33.976710  384965 kubeadm.go:581] duration metric: took 4m24.137472355s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:59:33.976750  384965 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:59:33.982173  384965 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:59:33.982211  384965 node_conditions.go:123] node cpu capacity is 2
	I1002 11:59:33.982227  384965 node_conditions.go:105] duration metric: took 5.470843ms to run NodePressure ...
	I1002 11:59:33.982242  384965 start.go:228] waiting for startup goroutines ...
	I1002 11:59:33.982251  384965 start.go:233] waiting for cluster config update ...
	I1002 11:59:33.982303  384965 start.go:242] writing updated cluster config ...
	I1002 11:59:33.982687  384965 ssh_runner.go:195] Run: rm -f paused
	I1002 11:59:34.039684  384965 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 11:59:34.041739  384965 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-777999" cluster and "default" namespace by default
	I1002 11:59:32.723475  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:35.221523  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:32.973400  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:33.473644  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:33.973820  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:34.473607  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:34.973848  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:35.473328  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:35.973485  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:36.473888  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:36.973837  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:37.473514  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:37.973633  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.094807  384787 kubeadm.go:1081] duration metric: took 11.38520709s to wait for elevateKubeSystemPrivileges.
	I1002 11:59:38.094846  384787 kubeadm.go:406] StartCluster complete in 5m11.722637512s
	I1002 11:59:38.094872  384787 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:38.094972  384787 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:59:38.097201  384787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:38.097495  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:59:38.097829  384787 config.go:182] Loaded profile config "embed-certs-487027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:59:38.097966  384787 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:59:38.098056  384787 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-487027"
	I1002 11:59:38.098079  384787 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-487027"
	I1002 11:59:38.098083  384787 addons.go:69] Setting default-storageclass=true in profile "embed-certs-487027"
	I1002 11:59:38.098098  384787 addons.go:69] Setting metrics-server=true in profile "embed-certs-487027"
	I1002 11:59:38.098110  384787 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-487027"
	I1002 11:59:38.098113  384787 addons.go:231] Setting addon metrics-server=true in "embed-certs-487027"
	W1002 11:59:38.098125  384787 addons.go:240] addon metrics-server should already be in state true
	I1002 11:59:38.098177  384787 host.go:66] Checking if "embed-certs-487027" exists ...
	I1002 11:59:38.098608  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.098643  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.098647  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	W1002 11:59:38.098092  384787 addons.go:240] addon storage-provisioner should already be in state true
	I1002 11:59:38.098827  384787 host.go:66] Checking if "embed-certs-487027" exists ...
	I1002 11:59:38.098670  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.099207  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.099235  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.118215  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40451
	I1002 11:59:38.118691  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.119232  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.119260  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.119649  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.120147  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.120182  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.129398  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41027
	I1002 11:59:38.129652  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35275
	I1002 11:59:38.130092  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.130723  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.130746  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.131301  384787 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-487027" context rescaled to 1 replicas
	I1002 11:59:38.131342  384787 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.147 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:59:38.133196  384787 out.go:177] * Verifying Kubernetes components...
	I1002 11:59:38.134675  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:38.132825  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.134964  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.135242  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.135408  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.135434  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.135834  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.136413  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.136455  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.138974  384787 addons.go:231] Setting addon default-storageclass=true in "embed-certs-487027"
	W1002 11:59:38.138995  384787 addons.go:240] addon default-storageclass should already be in state true
	I1002 11:59:38.139025  384787 host.go:66] Checking if "embed-certs-487027" exists ...
	I1002 11:59:38.139434  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.139469  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.141226  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40053
	I1002 11:59:38.141643  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.142086  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.142104  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.142433  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.142609  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.144425  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:59:38.146525  384787 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 11:59:38.148187  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 11:59:38.148204  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 11:59:38.148227  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:59:38.152187  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.152549  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:59:38.152575  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.152783  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:59:38.152988  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:59:38.153139  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:59:38.153280  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:59:38.157114  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33487
	I1002 11:59:38.157655  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.158192  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.158211  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.158619  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.159253  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.159290  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.159506  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34867
	I1002 11:59:38.159895  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.160383  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.160395  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.160727  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.160902  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.162835  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:59:38.164490  384787 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:59:37.211498  384505 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.504818 seconds
	I1002 11:59:37.211660  384505 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 11:59:37.229976  384505 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 11:59:37.759297  384505 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 11:59:37.759467  384505 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-749860 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1002 11:59:38.284135  384505 kubeadm.go:322] [bootstrap-token] Using token: rt49x4.7033jvaiaszsonci
	I1002 11:59:38.285950  384505 out.go:204]   - Configuring RBAC rules ...
	I1002 11:59:38.286108  384505 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 11:59:38.299290  384505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 11:59:38.306326  384505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 11:59:38.312137  384505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 11:59:38.320028  384505 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 11:59:38.439411  384505 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 11:59:38.704007  384505 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 11:59:38.705937  384505 kubeadm.go:322] 
	I1002 11:59:38.706075  384505 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 11:59:38.706096  384505 kubeadm.go:322] 
	I1002 11:59:38.706210  384505 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 11:59:38.706221  384505 kubeadm.go:322] 
	I1002 11:59:38.706256  384505 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 11:59:38.706341  384505 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 11:59:38.706433  384505 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 11:59:38.706448  384505 kubeadm.go:322] 
	I1002 11:59:38.706527  384505 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 11:59:38.706614  384505 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 11:59:38.706701  384505 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 11:59:38.706712  384505 kubeadm.go:322] 
	I1002 11:59:38.706805  384505 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1002 11:59:38.706898  384505 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 11:59:38.706910  384505 kubeadm.go:322] 
	I1002 11:59:38.707003  384505 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rt49x4.7033jvaiaszsonci \
	I1002 11:59:38.707134  384505 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 11:59:38.707169  384505 kubeadm.go:322]     --control-plane 	  
	I1002 11:59:38.707179  384505 kubeadm.go:322] 
	I1002 11:59:38.707272  384505 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 11:59:38.707283  384505 kubeadm.go:322] 
	I1002 11:59:38.707373  384505 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rt49x4.7033jvaiaszsonci \
	I1002 11:59:38.707500  384505 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 11:59:38.708451  384505 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 11:59:38.708478  384505 cni.go:84] Creating CNI manager for ""
	I1002 11:59:38.708501  384505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:59:38.710166  384505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:59:38.711596  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:59:38.725385  384505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:59:38.748155  384505 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:59:38.748294  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.748295  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=old-k8s-version-749860 minikube.k8s.io/updated_at=2023_10_02T11_59_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.795585  384505 ops.go:34] apiserver oom_adj: -16
	I1002 11:59:39.068200  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.166036  384787 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:38.166047  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:59:38.166063  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:59:38.169435  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.169903  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:59:38.169929  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.170098  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:59:38.170273  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:59:38.170517  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:59:38.170711  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:59:38.177450  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35439
	I1002 11:59:38.178044  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.178596  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.178616  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.179009  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.179244  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.181209  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:59:38.181596  384787 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:38.181613  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:59:38.181641  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:59:38.185272  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.185785  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:59:38.185813  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.186245  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:59:38.186539  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:59:38.186748  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:59:38.186938  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:59:38.337092  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 11:59:38.337129  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 11:59:38.379388  384787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:38.389992  384787 node_ready.go:35] waiting up to 6m0s for node "embed-certs-487027" to be "Ready" ...
	I1002 11:59:38.390060  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 11:59:38.399264  384787 node_ready.go:49] node "embed-certs-487027" has status "Ready":"True"
	I1002 11:59:38.399295  384787 node_ready.go:38] duration metric: took 9.264648ms waiting for node "embed-certs-487027" to be "Ready" ...
	I1002 11:59:38.399308  384787 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:38.401885  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 11:59:38.401909  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 11:59:38.406757  384787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:38.438158  384787 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.458749  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:38.458784  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 11:59:38.517143  384787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:38.547128  384787 pod_ready.go:92] pod "etcd-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:38.547161  384787 pod_ready.go:81] duration metric: took 108.899374ms waiting for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.547176  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.744560  384787 pod_ready.go:92] pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:38.744587  384787 pod_ready.go:81] duration metric: took 197.40322ms waiting for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.744598  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.852242  384787 pod_ready.go:92] pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:38.852277  384787 pod_ready.go:81] duration metric: took 107.671499ms waiting for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.852294  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6g7f7" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.017545  384787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.638113738s)
	I1002 11:59:41.017602  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.017613  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.017597  384787 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.627499125s)
	I1002 11:59:41.017658  384787 start.go:923] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1002 11:59:41.017718  384787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.610925223s)
	I1002 11:59:41.017747  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.017759  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.017907  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.017960  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.017977  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.017994  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.018535  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.018549  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.018559  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.018568  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.018636  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.018645  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.018679  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Closing plugin on server side
	I1002 11:59:41.019046  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Closing plugin on server side
	I1002 11:59:41.019049  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.019064  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.027153  384787 pod_ready.go:102] pod "kube-proxy-6g7f7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:41.049978  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.050007  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.050369  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.050391  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.100800  384787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.583606696s)
	I1002 11:59:41.100870  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.100900  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.101237  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.101258  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.101268  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.101278  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.101576  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Closing plugin on server side
	I1002 11:59:41.101621  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.101634  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.101647  384787 addons.go:467] Verifying addon metrics-server=true in "embed-certs-487027"
	I1002 11:59:41.103637  384787 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1002 11:59:37.222165  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:39.223800  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:41.105142  384787 addons.go:502] enable addons completed in 3.007188775s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1002 11:59:41.492039  384787 pod_ready.go:92] pod "kube-proxy-6g7f7" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:41.492067  384787 pod_ready.go:81] duration metric: took 2.639765498s waiting for pod "kube-proxy-6g7f7" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.492081  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.500950  384787 pod_ready.go:92] pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:41.500979  384787 pod_ready.go:81] duration metric: took 8.889098ms waiting for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.500990  384787 pod_ready.go:38] duration metric: took 3.101668727s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:41.501012  384787 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:59:41.501079  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:59:41.533141  384787 api_server.go:72] duration metric: took 3.401757173s to wait for apiserver process to appear ...
	I1002 11:59:41.533167  384787 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:59:41.533183  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:59:41.543027  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 200:
	ok
	I1002 11:59:41.545456  384787 api_server.go:141] control plane version: v1.28.2
	I1002 11:59:41.545483  384787 api_server.go:131] duration metric: took 12.308941ms to wait for apiserver health ...
	I1002 11:59:41.545494  384787 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:59:41.556090  384787 system_pods.go:59] 8 kube-system pods found
	I1002 11:59:41.556183  384787 system_pods.go:61] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:41.556209  384787 system_pods.go:61] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:41.556247  384787 system_pods.go:61] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:41.556272  384787 system_pods.go:61] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:41.556290  384787 system_pods.go:61] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:41.556306  384787 system_pods.go:61] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:41.556329  384787 system_pods.go:61] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:41.556366  384787 system_pods.go:61] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:41.556392  384787 system_pods.go:74] duration metric: took 10.889958ms to wait for pod list to return data ...
	I1002 11:59:41.556412  384787 default_sa.go:34] waiting for default service account to be created ...
	I1002 11:59:41.594659  384787 default_sa.go:45] found service account: "default"
	I1002 11:59:41.594690  384787 default_sa.go:55] duration metric: took 38.261546ms for default service account to be created ...
	I1002 11:59:41.594701  384787 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 11:59:41.800342  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:41.800375  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:41.800382  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:41.800388  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:41.800393  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:41.800397  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:41.800401  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:41.800407  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:41.800412  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:41.800431  384787 retry.go:31] will retry after 300.830497ms: missing components: kube-dns
	I1002 11:59:42.116978  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:42.117028  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:42.117039  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:42.117048  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:42.117058  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:42.117064  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:42.117071  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:42.117080  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:42.117089  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:42.117109  384787 retry.go:31] will retry after 380.49084ms: missing components: kube-dns
	I1002 11:59:42.506867  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:42.506901  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:42.506908  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:42.506914  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:42.506919  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:42.506923  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:42.506927  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:42.506933  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:42.506939  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:42.506954  384787 retry.go:31] will retry after 409.062449ms: missing components: kube-dns
	I1002 11:59:42.924401  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:42.924443  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:42.924456  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:42.924464  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:42.924471  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:42.924477  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:42.924484  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:42.924493  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:42.924503  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:42.924524  384787 retry.go:31] will retry after 544.758887ms: missing components: kube-dns
	I1002 11:59:43.477592  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:43.477622  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Running
	I1002 11:59:43.477628  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:43.477632  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:43.477637  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:43.477640  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:43.477645  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:43.477651  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:43.477657  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Running
	I1002 11:59:43.477665  384787 system_pods.go:126] duration metric: took 1.882959518s to wait for k8s-apps to be running ...
	I1002 11:59:43.477672  384787 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:59:43.477714  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:43.492105  384787 system_svc.go:56] duration metric: took 14.416995ms WaitForService to wait for kubelet.
	I1002 11:59:43.492138  384787 kubeadm.go:581] duration metric: took 5.360761991s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:59:43.492161  384787 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:59:43.496739  384787 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:59:43.496769  384787 node_conditions.go:123] node cpu capacity is 2
	I1002 11:59:43.496785  384787 node_conditions.go:105] duration metric: took 4.61842ms to run NodePressure ...
	I1002 11:59:43.496801  384787 start.go:228] waiting for startup goroutines ...
	I1002 11:59:43.496810  384787 start.go:233] waiting for cluster config update ...
	I1002 11:59:43.496823  384787 start.go:242] writing updated cluster config ...
	I1002 11:59:43.497156  384787 ssh_runner.go:195] Run: rm -f paused
	I1002 11:59:43.568627  384787 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 11:59:43.570324  384787 out.go:177] * Done! kubectl is now configured to use "embed-certs-487027" cluster and "default" namespace by default
	I1002 11:59:39.194035  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:39.810338  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:40.310222  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:40.809912  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:41.310004  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:41.810506  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:42.309581  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:42.810312  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:43.310294  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:43.809602  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:41.722699  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:44.221300  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:44.309927  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:44.810169  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:45.310095  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:45.809546  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:46.310144  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:46.809605  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:47.310487  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:47.809697  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:48.309464  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:48.809680  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:46.723036  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:49.220863  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:51.221417  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:49.310000  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:49.809922  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:50.310214  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:50.809728  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:51.309659  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:51.809723  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:52.309837  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:52.809788  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:53.309655  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:53.809468  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:54.310103  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:54.810421  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:54.968150  384505 kubeadm.go:1081] duration metric: took 16.219921091s to wait for elevateKubeSystemPrivileges.
	I1002 11:59:54.968184  384505 kubeadm.go:406] StartCluster complete in 5m46.426951815s
	I1002 11:59:54.968203  384505 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:54.968302  384505 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:59:54.970101  384505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:54.970429  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:59:54.970599  384505 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:59:54.970672  384505 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-749860"
	I1002 11:59:54.970692  384505 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-749860"
	W1002 11:59:54.970703  384505 addons.go:240] addon storage-provisioner should already be in state true
	I1002 11:59:54.970723  384505 config.go:182] Loaded profile config "old-k8s-version-749860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1002 11:59:54.970753  384505 host.go:66] Checking if "old-k8s-version-749860" exists ...
	I1002 11:59:54.970775  384505 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-749860"
	I1002 11:59:54.970792  384505 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-749860"
	I1002 11:59:54.971196  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.971204  384505 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-749860"
	I1002 11:59:54.971226  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.971199  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.971240  384505 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-749860"
	W1002 11:59:54.971251  384505 addons.go:240] addon metrics-server should already be in state true
	I1002 11:59:54.971281  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.971297  384505 host.go:66] Checking if "old-k8s-version-749860" exists ...
	I1002 11:59:54.971669  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.971707  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.989112  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40477
	I1002 11:59:54.989701  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:54.989819  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38163
	I1002 11:59:54.989971  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41679
	I1002 11:59:54.990503  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:54.990552  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:54.990574  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:54.990592  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:54.990975  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:54.991042  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:54.991062  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:54.991094  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:54.991110  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:54.991327  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:54.991555  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:54.991596  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:54.992169  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.992183  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.992197  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.992206  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.998018  384505 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-749860"
	W1002 11:59:54.998043  384505 addons.go:240] addon default-storageclass should already be in state true
	I1002 11:59:54.998067  384505 host.go:66] Checking if "old-k8s-version-749860" exists ...
	I1002 11:59:54.998716  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:55.003322  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:55.020037  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
	I1002 11:59:55.020659  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.021292  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.021313  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.021707  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.021896  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:55.022155  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34603
	I1002 11:59:55.022286  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35351
	I1002 11:59:55.022697  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.024740  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:59:55.024793  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.024824  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.024839  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.027065  384505 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 11:59:55.025237  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.025561  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.028415  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.028568  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 11:59:55.028579  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 11:59:55.028596  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:59:55.028867  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.029051  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:55.030397  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:55.030424  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:55.031461  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:59:55.033181  384505 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:59:55.032032  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.032651  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:59:55.034670  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:59:55.034698  384505 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:55.034703  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.034711  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:59:55.034727  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:59:55.034894  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:59:55.035089  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:59:55.035269  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:59:55.046534  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.046573  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:59:55.046599  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.046629  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:59:55.046888  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:59:55.047102  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:59:55.047276  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:59:55.051887  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45615
	I1002 11:59:55.052372  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.052940  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.052970  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.053349  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.053558  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:55.055503  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:59:55.055762  384505 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:55.055780  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:59:55.055805  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:59:55.062494  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.062526  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:59:55.062542  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.062550  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:59:55.062752  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:59:55.062922  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:59:55.063162  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:59:55.103907  384505 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-749860" context rescaled to 1 replicas
	I1002 11:59:55.103958  384505 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.82 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:59:55.105626  384505 out.go:177] * Verifying Kubernetes components...
	I1002 11:59:53.722331  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:54.914848  384344 pod_ready.go:81] duration metric: took 4m0.000973055s waiting for pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:54.914899  384344 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:54.914926  384344 pod_ready.go:38] duration metric: took 4m12.745047876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:54.914963  384344 kubeadm.go:640] restartCluster took 4m32.83554771s
	W1002 11:59:54.915062  384344 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1002 11:59:54.915098  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 11:59:55.106948  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:55.283274  384505 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-749860" to be "Ready" ...
	I1002 11:59:55.283336  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 11:59:55.291603  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 11:59:55.291629  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 11:59:55.297775  384505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:55.321901  384505 node_ready.go:49] node "old-k8s-version-749860" has status "Ready":"True"
	I1002 11:59:55.321927  384505 node_ready.go:38] duration metric: took 38.615436ms waiting for node "old-k8s-version-749860" to be "Ready" ...
	I1002 11:59:55.321939  384505 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:55.327570  384505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:55.355612  384505 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:55.357164  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 11:59:55.357187  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 11:59:55.423852  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:55.423883  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 11:59:55.477683  384505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:56.041846  384505 start.go:923] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I1002 11:59:56.230394  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230432  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.230466  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230488  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.230810  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.230869  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.230888  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.230913  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.230936  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230890  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230969  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.230990  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.231024  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.231326  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.231341  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.231652  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.231667  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.231740  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.327260  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.327289  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.327633  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.327654  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.547462  384505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.069727635s)
	I1002 11:59:56.547536  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.547549  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.547901  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.547948  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.547974  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.547993  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.548010  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.548288  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.548321  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.548322  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.548333  384505 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-749860"
	I1002 11:59:56.550084  384505 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1002 11:59:56.551798  384505 addons.go:502] enable addons completed in 1.581195105s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1002 11:59:57.554993  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:59.933613  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:01.937565  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:04.431925  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:05.433988  384505 pod_ready.go:92] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:05.434013  384505 pod_ready.go:81] duration metric: took 10.078369703s waiting for pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:05.434029  384505 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mdtp5" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:05.441501  384505 pod_ready.go:92] pod "kube-proxy-mdtp5" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:05.441534  384505 pod_ready.go:81] duration metric: took 7.496823ms waiting for pod "kube-proxy-mdtp5" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:05.441543  384505 pod_ready.go:38] duration metric: took 10.1195912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:00:05.441592  384505 api_server.go:52] waiting for apiserver process to appear ...
	I1002 12:00:05.441680  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 12:00:05.460054  384505 api_server.go:72] duration metric: took 10.356049869s to wait for apiserver process to appear ...
	I1002 12:00:05.460080  384505 api_server.go:88] waiting for apiserver healthz status ...
	I1002 12:00:05.460100  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 12:00:05.466796  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 200:
	ok
	I1002 12:00:05.467813  384505 api_server.go:141] control plane version: v1.16.0
	I1002 12:00:05.467845  384505 api_server.go:131] duration metric: took 7.75678ms to wait for apiserver health ...
	I1002 12:00:05.467855  384505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 12:00:05.472349  384505 system_pods.go:59] 4 kube-system pods found
	I1002 12:00:05.472384  384505 system_pods.go:61] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:05.472391  384505 system_pods.go:61] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:05.472401  384505 system_pods.go:61] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:05.472410  384505 system_pods.go:61] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:05.472433  384505 system_pods.go:74] duration metric: took 4.569442ms to wait for pod list to return data ...
	I1002 12:00:05.472446  384505 default_sa.go:34] waiting for default service account to be created ...
	I1002 12:00:05.476327  384505 default_sa.go:45] found service account: "default"
	I1002 12:00:05.476349  384505 default_sa.go:55] duration metric: took 3.895344ms for default service account to be created ...
	I1002 12:00:05.476357  384505 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 12:00:05.480522  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:05.480545  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:05.480550  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:05.480557  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:05.480563  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:05.480579  384505 retry.go:31] will retry after 270.891275ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:05.757515  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:05.757555  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:05.757563  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:05.757574  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:05.757585  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:05.757603  384505 retry.go:31] will retry after 336.725562ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:06.099945  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:06.099978  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:06.099985  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:06.099995  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:06.100002  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:06.100024  384505 retry.go:31] will retry after 389.53153ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:06.504317  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:06.504354  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:06.504362  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:06.504375  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:06.504385  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:06.504407  384505 retry.go:31] will retry after 453.465732ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:06.962509  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:06.962534  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:06.962539  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:06.962546  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:06.962552  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:06.962568  384505 retry.go:31] will retry after 489.820063ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:07.457422  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:07.457451  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:07.457456  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:07.457465  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:07.457472  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:07.457490  384505 retry.go:31] will retry after 931.079053ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:08.394500  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:08.394527  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:08.394532  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:08.394538  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:08.394546  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:08.394562  384505 retry.go:31] will retry after 929.512162ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:09.216426  384344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.301296702s)
	I1002 12:00:09.216493  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:00:09.230712  384344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 12:00:09.239588  384344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 12:00:09.248624  384344 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 12:00:09.248677  384344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 12:00:09.466935  384344 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 12:00:09.329677  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:09.329709  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:09.329714  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:09.329722  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:09.329728  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:09.329746  384505 retry.go:31] will retry after 898.08397ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:10.232119  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:10.232155  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:10.232163  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:10.232176  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:10.232185  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:10.232212  384505 retry.go:31] will retry after 1.809149678s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:12.047424  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:12.047452  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:12.047458  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:12.047465  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:12.047471  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:12.047487  384505 retry.go:31] will retry after 2.054960799s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:14.109048  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:14.109080  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:14.109088  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:14.109098  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:14.109108  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:14.109128  384505 retry.go:31] will retry after 2.523219254s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:16.640373  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:16.640399  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:16.640405  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:16.640412  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:16.640419  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:16.640436  384505 retry.go:31] will retry after 2.61022195s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:19.606412  384344 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 12:00:19.606505  384344 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 12:00:19.606620  384344 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 12:00:19.606760  384344 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 12:00:19.606856  384344 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 12:00:19.606912  384344 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 12:00:19.608541  384344 out.go:204]   - Generating certificates and keys ...
	I1002 12:00:19.608638  384344 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 12:00:19.608743  384344 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 12:00:19.608891  384344 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 12:00:19.608999  384344 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1002 12:00:19.609113  384344 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 12:00:19.609193  384344 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1002 12:00:19.609276  384344 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1002 12:00:19.609360  384344 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1002 12:00:19.609453  384344 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 12:00:19.609548  384344 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 12:00:19.609624  384344 kubeadm.go:322] [certs] Using the existing "sa" key
	I1002 12:00:19.609694  384344 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 12:00:19.609761  384344 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 12:00:19.609833  384344 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 12:00:19.609916  384344 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 12:00:19.609991  384344 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 12:00:19.610100  384344 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 12:00:19.610182  384344 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 12:00:19.611696  384344 out.go:204]   - Booting up control plane ...
	I1002 12:00:19.611810  384344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 12:00:19.611916  384344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 12:00:19.612021  384344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 12:00:19.612173  384344 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 12:00:19.612294  384344 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 12:00:19.612346  384344 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 12:00:19.612576  384344 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 12:00:19.612683  384344 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502476 seconds
	I1002 12:00:19.612825  384344 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 12:00:19.612943  384344 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 12:00:19.613026  384344 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 12:00:19.613215  384344 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-304121 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 12:00:19.613266  384344 kubeadm.go:322] [bootstrap-token] Using token: pd40pp.2tkeaw4x1d1qfkq9
	I1002 12:00:19.614472  384344 out.go:204]   - Configuring RBAC rules ...
	I1002 12:00:19.614593  384344 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 12:00:19.614706  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 12:00:19.614912  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 12:00:19.615054  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 12:00:19.615220  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 12:00:19.615315  384344 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 12:00:19.615474  384344 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 12:00:19.615540  384344 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 12:00:19.615622  384344 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 12:00:19.615633  384344 kubeadm.go:322] 
	I1002 12:00:19.615725  384344 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 12:00:19.615747  384344 kubeadm.go:322] 
	I1002 12:00:19.615851  384344 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 12:00:19.615864  384344 kubeadm.go:322] 
	I1002 12:00:19.615894  384344 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 12:00:19.615997  384344 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 12:00:19.616084  384344 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 12:00:19.616094  384344 kubeadm.go:322] 
	I1002 12:00:19.616143  384344 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 12:00:19.616152  384344 kubeadm.go:322] 
	I1002 12:00:19.616222  384344 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 12:00:19.616240  384344 kubeadm.go:322] 
	I1002 12:00:19.616321  384344 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 12:00:19.616420  384344 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 12:00:19.616532  384344 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 12:00:19.616548  384344 kubeadm.go:322] 
	I1002 12:00:19.616640  384344 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 12:00:19.616734  384344 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 12:00:19.616743  384344 kubeadm.go:322] 
	I1002 12:00:19.616857  384344 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token pd40pp.2tkeaw4x1d1qfkq9 \
	I1002 12:00:19.617005  384344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 12:00:19.617049  384344 kubeadm.go:322] 	--control-plane 
	I1002 12:00:19.617059  384344 kubeadm.go:322] 
	I1002 12:00:19.617136  384344 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 12:00:19.617142  384344 kubeadm.go:322] 
	I1002 12:00:19.617238  384344 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token pd40pp.2tkeaw4x1d1qfkq9 \
	I1002 12:00:19.617333  384344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 12:00:19.617371  384344 cni.go:84] Creating CNI manager for ""
	I1002 12:00:19.617384  384344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 12:00:19.618962  384344 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 12:00:19.620215  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 12:00:19.650698  384344 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 12:00:19.699458  384344 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 12:00:19.699594  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=no-preload-304121 minikube.k8s.io/updated_at=2023_10_02T12_00_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:19.699598  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:19.810984  384344 ops.go:34] apiserver oom_adj: -16
	I1002 12:00:20.114460  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:20.245669  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:20.876563  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:19.256294  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:19.256319  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:19.256325  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:19.256332  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:19.256338  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:19.256355  384505 retry.go:31] will retry after 3.270215577s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:22.532684  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:22.532714  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:22.532723  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:22.532730  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:22.532737  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:22.532754  384505 retry.go:31] will retry after 5.273561216s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:21.376620  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:21.876453  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:22.376537  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:22.876967  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:23.377242  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:23.876469  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:24.376391  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:24.877422  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:25.376422  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:25.877251  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:27.810777  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:27.810810  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:27.810816  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:27.810822  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:27.810828  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:27.810845  384505 retry.go:31] will retry after 6.34425242s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:26.376388  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:26.877267  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:27.376480  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:27.877214  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:28.376560  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:28.876964  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:29.377314  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:29.877135  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:30.377301  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:30.876525  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:31.376660  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:31.876991  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:32.376934  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:32.584774  384344 kubeadm.go:1081] duration metric: took 12.88524826s to wait for elevateKubeSystemPrivileges.
	I1002 12:00:32.584821  384344 kubeadm.go:406] StartCluster complete in 5m10.55691254s
	I1002 12:00:32.584849  384344 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:00:32.584955  384344 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 12:00:32.587722  384344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:00:32.588018  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 12:00:32.588146  384344 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 12:00:32.588230  384344 addons.go:69] Setting default-storageclass=true in profile "no-preload-304121"
	I1002 12:00:32.588251  384344 addons.go:69] Setting metrics-server=true in profile "no-preload-304121"
	I1002 12:00:32.588265  384344 addons.go:231] Setting addon metrics-server=true in "no-preload-304121"
	W1002 12:00:32.588273  384344 addons.go:240] addon metrics-server should already be in state true
	I1002 12:00:32.588252  384344 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-304121"
	I1002 12:00:32.588323  384344 config.go:182] Loaded profile config "no-preload-304121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:00:32.588333  384344 host.go:66] Checking if "no-preload-304121" exists ...
	I1002 12:00:32.588229  384344 addons.go:69] Setting storage-provisioner=true in profile "no-preload-304121"
	I1002 12:00:32.588387  384344 addons.go:231] Setting addon storage-provisioner=true in "no-preload-304121"
	W1002 12:00:32.588397  384344 addons.go:240] addon storage-provisioner should already be in state true
	I1002 12:00:32.588433  384344 host.go:66] Checking if "no-preload-304121" exists ...
	I1002 12:00:32.588695  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.588731  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.588737  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.588777  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.588867  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.588891  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.612093  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40075
	I1002 12:00:32.612118  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33697
	I1002 12:00:32.612252  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I1002 12:00:32.612652  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.612799  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.612847  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.613307  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.613337  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.613432  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.613504  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.613715  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.613718  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.613838  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.613955  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.614146  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.614197  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.614802  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.614842  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.615497  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.615534  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.617844  384344 addons.go:231] Setting addon default-storageclass=true in "no-preload-304121"
	W1002 12:00:32.617884  384344 addons.go:240] addon default-storageclass should already be in state true
	I1002 12:00:32.617914  384344 host.go:66] Checking if "no-preload-304121" exists ...
	I1002 12:00:32.618326  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.618436  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.634123  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36037
	I1002 12:00:32.634849  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.634953  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
	I1002 12:00:32.635328  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.635470  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.635495  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.635819  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.635841  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.635867  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.636193  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.636340  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.636373  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.636435  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.637717  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39059
	I1002 12:00:32.638051  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 12:00:32.640160  384344 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 12:00:32.642288  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 12:00:32.642300  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 12:00:32.642314  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 12:00:32.640240  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.642837  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.642863  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.643527  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.643695  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.645514  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.645565  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 12:00:32.648157  384344 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 12:00:32.645977  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 12:00:32.646152  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 12:00:32.650297  384344 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 12:00:32.650313  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 12:00:32.650328  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 12:00:32.650380  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.650547  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 12:00:32.650823  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 12:00:32.650961  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 12:00:32.653953  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.654560  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 12:00:32.654592  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.654886  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 12:00:32.655049  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 12:00:32.655195  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 12:00:32.655410  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 12:00:32.658005  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45823
	I1002 12:00:32.658525  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.659046  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.659059  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.659478  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.659611  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.661708  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 12:00:32.661982  384344 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 12:00:32.661998  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 12:00:32.662018  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 12:00:32.664637  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.665005  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 12:00:32.665023  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.665161  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 12:00:32.665335  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 12:00:32.665426  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 12:00:32.665610  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 12:00:32.723429  384344 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-304121" context rescaled to 1 replicas
	I1002 12:00:32.723469  384344 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 12:00:32.725329  384344 out.go:177] * Verifying Kubernetes components...
	I1002 12:00:32.726924  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:00:32.860425  384344 node_ready.go:35] waiting up to 6m0s for node "no-preload-304121" to be "Ready" ...
	I1002 12:00:32.860515  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 12:00:32.904658  384344 node_ready.go:49] node "no-preload-304121" has status "Ready":"True"
	I1002 12:00:32.904689  384344 node_ready.go:38] duration metric: took 44.230643ms waiting for node "no-preload-304121" to be "Ready" ...
	I1002 12:00:32.904705  384344 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:00:32.949887  384344 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:32.984050  384344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 12:00:32.997841  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 12:00:32.997869  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 12:00:32.999235  384344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 12:00:33.082015  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 12:00:33.082051  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 12:00:33.326524  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 12:00:33.326554  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 12:00:33.403533  384344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 12:00:34.844716  384344 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.984135314s)
	I1002 12:00:34.844752  384344 start.go:923] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1002 12:00:35.114639  384344 pod_ready.go:102] pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:35.538571  384344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.55447937s)
	I1002 12:00:35.538624  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.538641  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.538652  384344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.539381648s)
	I1002 12:00:35.538700  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.538713  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.539005  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539027  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.539039  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.539049  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.539137  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.539162  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539176  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.539194  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.539203  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.539299  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.539328  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539341  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.539537  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.539588  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539622  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.596015  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.596048  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.596384  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.596431  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.596449  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.641915  384344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.238327482s)
	I1002 12:00:35.641985  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.642007  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.642363  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.642389  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.642399  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.642409  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.642423  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.642716  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.642739  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.642750  384344 addons.go:467] Verifying addon metrics-server=true in "no-preload-304121"
	I1002 12:00:35.644696  384344 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1002 12:00:35.646046  384344 addons.go:502] enable addons completed in 3.05790546s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1002 12:00:36.113386  384344 pod_ready.go:92] pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.113415  384344 pod_ready.go:81] duration metric: took 3.163496821s waiting for pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.113429  384344 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.116264  384344 pod_ready.go:97] error getting pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-zcnv5" not found
	I1002 12:00:36.116290  384344 pod_ready.go:81] duration metric: took 2.85415ms waiting for pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace to be "Ready" ...
	E1002 12:00:36.116300  384344 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-zcnv5" not found
	I1002 12:00:36.116306  384344 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.126555  384344 pod_ready.go:92] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.126575  384344 pod_ready.go:81] duration metric: took 10.262082ms waiting for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.126583  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.137876  384344 pod_ready.go:92] pod "kube-apiserver-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.137903  384344 pod_ready.go:81] duration metric: took 11.312511ms waiting for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.137916  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.146526  384344 pod_ready.go:92] pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.146549  384344 pod_ready.go:81] duration metric: took 8.624341ms waiting for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.146561  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sprhm" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.307205  384344 pod_ready.go:92] pod "kube-proxy-sprhm" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.307231  384344 pod_ready.go:81] duration metric: took 160.663088ms waiting for pod "kube-proxy-sprhm" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.307241  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.707429  384344 pod_ready.go:92] pod "kube-scheduler-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.707455  384344 pod_ready.go:81] duration metric: took 400.207608ms waiting for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.707463  384344 pod_ready.go:38] duration metric: took 3.802745796s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:00:36.707480  384344 api_server.go:52] waiting for apiserver process to appear ...
	I1002 12:00:36.707537  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 12:00:36.733934  384344 api_server.go:72] duration metric: took 4.010431274s to wait for apiserver process to appear ...
	I1002 12:00:36.733962  384344 api_server.go:88] waiting for apiserver healthz status ...
	I1002 12:00:36.733979  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 12:00:36.740562  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 200:
	ok
	I1002 12:00:36.742234  384344 api_server.go:141] control plane version: v1.28.2
	I1002 12:00:36.742259  384344 api_server.go:131] duration metric: took 8.289515ms to wait for apiserver health ...
	I1002 12:00:36.742270  384344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 12:00:36.910934  384344 system_pods.go:59] 8 kube-system pods found
	I1002 12:00:36.910962  384344 system_pods.go:61] "coredns-5dd5756b68-st2bd" [6623fa3f-9a60-4364-bf08-7e84ae35d4b6] Running
	I1002 12:00:36.910967  384344 system_pods.go:61] "etcd-no-preload-304121" [f0a08dd5-ccdd-44a8-8d0a-ba5f617db7e0] Running
	I1002 12:00:36.910971  384344 system_pods.go:61] "kube-apiserver-no-preload-304121" [2e0d2991-fec5-44b4-8bb2-70206956c983] Running
	I1002 12:00:36.910976  384344 system_pods.go:61] "kube-controller-manager-no-preload-304121" [51031981-2958-4947-8d10-59a15a77ec1b] Running
	I1002 12:00:36.910980  384344 system_pods.go:61] "kube-proxy-sprhm" [d032413b-07c5-4478-bbdf-93383f85f73d] Running
	I1002 12:00:36.910983  384344 system_pods.go:61] "kube-scheduler-no-preload-304121" [f825ba3f-3bca-40ed-a5db-d3a3fc8b0751] Running
	I1002 12:00:36.910991  384344 system_pods.go:61] "metrics-server-57f55c9bc5-6c2hc" [020790e8-555b-4455-8e82-6ea49bb4212a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:36.911002  384344 system_pods.go:61] "storage-provisioner" [9c5b5a2d-e464-477e-9b5c-bf830ee9c640] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 12:00:36.911013  384344 system_pods.go:74] duration metric: took 168.734676ms to wait for pod list to return data ...
	I1002 12:00:36.911027  384344 default_sa.go:34] waiting for default service account to be created ...
	I1002 12:00:37.106994  384344 default_sa.go:45] found service account: "default"
	I1002 12:00:37.107038  384344 default_sa.go:55] duration metric: took 196.001935ms for default service account to be created ...
	I1002 12:00:37.107050  384344 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 12:00:37.310973  384344 system_pods.go:86] 8 kube-system pods found
	I1002 12:00:37.311012  384344 system_pods.go:89] "coredns-5dd5756b68-st2bd" [6623fa3f-9a60-4364-bf08-7e84ae35d4b6] Running
	I1002 12:00:37.311021  384344 system_pods.go:89] "etcd-no-preload-304121" [f0a08dd5-ccdd-44a8-8d0a-ba5f617db7e0] Running
	I1002 12:00:37.311028  384344 system_pods.go:89] "kube-apiserver-no-preload-304121" [2e0d2991-fec5-44b4-8bb2-70206956c983] Running
	I1002 12:00:37.311034  384344 system_pods.go:89] "kube-controller-manager-no-preload-304121" [51031981-2958-4947-8d10-59a15a77ec1b] Running
	I1002 12:00:37.311041  384344 system_pods.go:89] "kube-proxy-sprhm" [d032413b-07c5-4478-bbdf-93383f85f73d] Running
	I1002 12:00:37.311049  384344 system_pods.go:89] "kube-scheduler-no-preload-304121" [f825ba3f-3bca-40ed-a5db-d3a3fc8b0751] Running
	I1002 12:00:37.311060  384344 system_pods.go:89] "metrics-server-57f55c9bc5-6c2hc" [020790e8-555b-4455-8e82-6ea49bb4212a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:37.311075  384344 system_pods.go:89] "storage-provisioner" [9c5b5a2d-e464-477e-9b5c-bf830ee9c640] Running
	I1002 12:00:37.311093  384344 system_pods.go:126] duration metric: took 204.035391ms to wait for k8s-apps to be running ...
	I1002 12:00:37.311103  384344 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 12:00:37.311158  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:00:37.327711  384344 system_svc.go:56] duration metric: took 16.597865ms WaitForService to wait for kubelet.
	I1002 12:00:37.327736  384344 kubeadm.go:581] duration metric: took 4.604243467s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 12:00:37.327758  384344 node_conditions.go:102] verifying NodePressure condition ...
	I1002 12:00:37.506633  384344 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 12:00:37.506693  384344 node_conditions.go:123] node cpu capacity is 2
	I1002 12:00:37.506708  384344 node_conditions.go:105] duration metric: took 178.94359ms to run NodePressure ...
	I1002 12:00:37.506722  384344 start.go:228] waiting for startup goroutines ...
	I1002 12:00:37.506728  384344 start.go:233] waiting for cluster config update ...
	I1002 12:00:37.506738  384344 start.go:242] writing updated cluster config ...
	I1002 12:00:37.506999  384344 ssh_runner.go:195] Run: rm -f paused
	I1002 12:00:37.558171  384344 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 12:00:37.560280  384344 out.go:177] * Done! kubectl is now configured to use "no-preload-304121" cluster and "default" namespace by default
	I1002 12:00:34.160478  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:34.160520  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:34.160528  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:34.160540  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:34.160553  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:34.160577  384505 retry.go:31] will retry after 8.056057378s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:42.223209  384505 system_pods.go:86] 5 kube-system pods found
	I1002 12:00:42.223242  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:42.223251  384505 system_pods.go:89] "etcd-old-k8s-version-749860" [893d0bc0-cefb-48d6-82d8-6d6804184a67] Pending
	I1002 12:00:42.223257  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:42.223267  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:42.223276  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:42.223299  384505 retry.go:31] will retry after 9.279474557s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:51.510907  384505 system_pods.go:86] 6 kube-system pods found
	I1002 12:00:51.510937  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:51.510945  384505 system_pods.go:89] "etcd-old-k8s-version-749860" [893d0bc0-cefb-48d6-82d8-6d6804184a67] Running
	I1002 12:00:51.510949  384505 system_pods.go:89] "kube-apiserver-old-k8s-version-749860" [41854b6e-d738-4af3-9734-8133b2a299df] Pending
	I1002 12:00:51.510953  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:51.510959  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:51.510965  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:51.510995  384505 retry.go:31] will retry after 9.19295244s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:01:00.712167  384505 system_pods.go:86] 8 kube-system pods found
	I1002 12:01:00.712195  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:01:00.712201  384505 system_pods.go:89] "etcd-old-k8s-version-749860" [893d0bc0-cefb-48d6-82d8-6d6804184a67] Running
	I1002 12:01:00.712205  384505 system_pods.go:89] "kube-apiserver-old-k8s-version-749860" [41854b6e-d738-4af3-9734-8133b2a299df] Running
	I1002 12:01:00.712209  384505 system_pods.go:89] "kube-controller-manager-old-k8s-version-749860" [1531e118-f1f1-485e-b258-32e21b3385d8] Running
	I1002 12:01:00.712213  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:01:00.712217  384505 system_pods.go:89] "kube-scheduler-old-k8s-version-749860" [66983e5c-64ab-48ec-9c24-824f0a7cb36e] Running
	I1002 12:01:00.712223  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:01:00.712230  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:01:00.712237  384505 system_pods.go:126] duration metric: took 55.235875161s to wait for k8s-apps to be running ...
	I1002 12:01:00.712244  384505 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 12:01:00.712293  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:01:00.728970  384505 system_svc.go:56] duration metric: took 16.712185ms WaitForService to wait for kubelet.
	I1002 12:01:00.728999  384505 kubeadm.go:581] duration metric: took 1m5.625005524s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 12:01:00.729026  384505 node_conditions.go:102] verifying NodePressure condition ...
	I1002 12:01:00.733153  384505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 12:01:00.733180  384505 node_conditions.go:123] node cpu capacity is 2
	I1002 12:01:00.733196  384505 node_conditions.go:105] duration metric: took 4.162147ms to run NodePressure ...
	I1002 12:01:00.733209  384505 start.go:228] waiting for startup goroutines ...
	I1002 12:01:00.733216  384505 start.go:233] waiting for cluster config update ...
	I1002 12:01:00.733230  384505 start.go:242] writing updated cluster config ...
	I1002 12:01:00.733553  384505 ssh_runner.go:195] Run: rm -f paused
	I1002 12:01:00.784237  384505 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I1002 12:01:00.786178  384505 out.go:177] 
	W1002 12:01:00.787686  384505 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I1002 12:01:00.789104  384505 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1002 12:01:00.790521  384505 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-749860" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-02 11:54:52 UTC, ends at Mon 2023-10-02 12:09:39 UTC. --
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.306477346Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248579306467014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=249f9801-01c3-414c-83d1-ebcad6b2cd72 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.307318441Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aeb597a9-4f65-42cf-ad16-e8b324407f2d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.307368718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aeb597a9-4f65-42cf-ad16-e8b324407f2d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.307525718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e172cd6aafba5f002d03f4e61064bc228967577aa9e377fe0b2bb9587f62d58,PodSandboxId:ec313f0f0ab1dc9f814119a453653e9e0ae9370321a3fb6248e2f775633b7c69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1696248036592853251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5b5a2d-e464-477e-9b5c-bf830ee9c640,},Annotations:map[string]string{io.kubernetes.container.hash: d8abb607,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587b3cddeef4cd70d012f9d07cd05ed5cab79768992a248b74b4fa3a6004790e,PodSandboxId:ecbb6c9a8e481d039c79978366d66e15fbedf7485f6ea8b3179bd6b8cc4abece,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1696248035247541993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sprhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d032413b-07c5-4478-bbdf-93383f85f73d,},Annotations:map[string]string{io.kubernetes.container.hash: a8552626,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a9443897a0d50a81f429a9d66aebda04c07159512952e3a89dc7f9405a51d24,PodSandboxId:2a75f65496e4cfdda7933426a7c4b62b12f4073561c6ef9bb49c9ec1b1dc5ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1696248034681177336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-st2bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6623fa3f-9a60-4364-bf08-7e84ae35d4b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ec8c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef0f8b845289f20cedb980bbdd46e4ba218f355ef6e70326cf177d3cceb7904,PodSandboxId:32c3582344c54a990c237d8bee99c855b64a63fe2a327e609fc1023848bab57e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1696248012521472249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf54f7ca4952bccd9496d46885a9b99a,},Annotations:map[string]string{io.kubernetes.container.hash: 6f95869f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6899c47560e43a818786d66b58efdea1e54569cb91c811e5887187815f6ed7,PodSandboxId:53f5e0765e7c0e67157a95f8d18ed6dc0eb670bb8ad854a3ccd4f9ae809f1919,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1696248012462424201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf4fde3c63df71c35302becd8e0a1e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97038dd3f301df983c530d45dad98f3c95a3a4624069ea9bcfcb0e970ffaa7d,PodSandboxId:58547f6befd05beae984b861939c5e2d2bdd14b8246e8b1e241a63293bbe179c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1696248012152633049,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 583e04191a09ca04403f10ea67b5a093,},Annotations:map[string]string{io.kubernetes.container.hash: cb894288,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e8d031a517413a16fda2adabc04ca2f730fb541cf8fe025a18de5bfa8595a9,PodSandboxId:77f10f7dc88d2b7ed5140ca5bd7318c11e8af29ff90106ffc5a0a04666d3b783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1696248011996250640,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad74ed262ff474e1338c8ec0e95d7eb,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aeb597a9-4f65-42cf-ad16-e8b324407f2d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.355812566Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2fcfc573-7bdb-4f9b-8151-23314c88a9ee name=/runtime.v1.RuntimeService/Version
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.355896174Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2fcfc573-7bdb-4f9b-8151-23314c88a9ee name=/runtime.v1.RuntimeService/Version
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.357530605Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=782e9c66-b5ab-4177-bbe6-877b96f002b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.357948477Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248579357934084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=782e9c66-b5ab-4177-bbe6-877b96f002b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.358866483Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6695bcc6-7fd4-4c62-9a64-3600eaf94552 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.358926093Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6695bcc6-7fd4-4c62-9a64-3600eaf94552 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.360166079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e172cd6aafba5f002d03f4e61064bc228967577aa9e377fe0b2bb9587f62d58,PodSandboxId:ec313f0f0ab1dc9f814119a453653e9e0ae9370321a3fb6248e2f775633b7c69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1696248036592853251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5b5a2d-e464-477e-9b5c-bf830ee9c640,},Annotations:map[string]string{io.kubernetes.container.hash: d8abb607,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587b3cddeef4cd70d012f9d07cd05ed5cab79768992a248b74b4fa3a6004790e,PodSandboxId:ecbb6c9a8e481d039c79978366d66e15fbedf7485f6ea8b3179bd6b8cc4abece,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1696248035247541993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sprhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d032413b-07c5-4478-bbdf-93383f85f73d,},Annotations:map[string]string{io.kubernetes.container.hash: a8552626,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a9443897a0d50a81f429a9d66aebda04c07159512952e3a89dc7f9405a51d24,PodSandboxId:2a75f65496e4cfdda7933426a7c4b62b12f4073561c6ef9bb49c9ec1b1dc5ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1696248034681177336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-st2bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6623fa3f-9a60-4364-bf08-7e84ae35d4b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ec8c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef0f8b845289f20cedb980bbdd46e4ba218f355ef6e70326cf177d3cceb7904,PodSandboxId:32c3582344c54a990c237d8bee99c855b64a63fe2a327e609fc1023848bab57e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1696248012521472249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf54f7ca4952bccd9496d46885a9b99a,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6f95869f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6899c47560e43a818786d66b58efdea1e54569cb91c811e5887187815f6ed7,PodSandboxId:53f5e0765e7c0e67157a95f8d18ed6dc0eb670bb8ad854a3ccd4f9ae809f1919,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1696248012462424201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf4fde3c63df71
c35302becd8e0a1e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97038dd3f301df983c530d45dad98f3c95a3a4624069ea9bcfcb0e970ffaa7d,PodSandboxId:58547f6befd05beae984b861939c5e2d2bdd14b8246e8b1e241a63293bbe179c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1696248012152633049,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 583e04191a09ca04403f10ea67
b5a093,},Annotations:map[string]string{io.kubernetes.container.hash: cb894288,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e8d031a517413a16fda2adabc04ca2f730fb541cf8fe025a18de5bfa8595a9,PodSandboxId:77f10f7dc88d2b7ed5140ca5bd7318c11e8af29ff90106ffc5a0a04666d3b783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1696248011996250640,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad74ed262ff474e1338c8ec0e95d7eb,},An
notations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6695bcc6-7fd4-4c62-9a64-3600eaf94552 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.400696080Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=70aba1c5-d7d0-4d6a-a155-8bc23d2bfd80 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.400752721Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=70aba1c5-d7d0-4d6a-a155-8bc23d2bfd80 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.401834286Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=dd7f3aab-2304-4033-a81c-4609685866be name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.402375957Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248579402350154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=dd7f3aab-2304-4033-a81c-4609685866be name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.402917356Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7da77360-4a01-4e89-bc0d-3aeb0ce74466 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.403089531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7da77360-4a01-4e89-bc0d-3aeb0ce74466 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.403273590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e172cd6aafba5f002d03f4e61064bc228967577aa9e377fe0b2bb9587f62d58,PodSandboxId:ec313f0f0ab1dc9f814119a453653e9e0ae9370321a3fb6248e2f775633b7c69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1696248036592853251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5b5a2d-e464-477e-9b5c-bf830ee9c640,},Annotations:map[string]string{io.kubernetes.container.hash: d8abb607,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587b3cddeef4cd70d012f9d07cd05ed5cab79768992a248b74b4fa3a6004790e,PodSandboxId:ecbb6c9a8e481d039c79978366d66e15fbedf7485f6ea8b3179bd6b8cc4abece,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1696248035247541993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sprhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d032413b-07c5-4478-bbdf-93383f85f73d,},Annotations:map[string]string{io.kubernetes.container.hash: a8552626,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a9443897a0d50a81f429a9d66aebda04c07159512952e3a89dc7f9405a51d24,PodSandboxId:2a75f65496e4cfdda7933426a7c4b62b12f4073561c6ef9bb49c9ec1b1dc5ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1696248034681177336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-st2bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6623fa3f-9a60-4364-bf08-7e84ae35d4b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ec8c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef0f8b845289f20cedb980bbdd46e4ba218f355ef6e70326cf177d3cceb7904,PodSandboxId:32c3582344c54a990c237d8bee99c855b64a63fe2a327e609fc1023848bab57e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1696248012521472249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf54f7ca4952bccd9496d46885a9b99a,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6f95869f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6899c47560e43a818786d66b58efdea1e54569cb91c811e5887187815f6ed7,PodSandboxId:53f5e0765e7c0e67157a95f8d18ed6dc0eb670bb8ad854a3ccd4f9ae809f1919,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1696248012462424201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf4fde3c63df71
c35302becd8e0a1e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97038dd3f301df983c530d45dad98f3c95a3a4624069ea9bcfcb0e970ffaa7d,PodSandboxId:58547f6befd05beae984b861939c5e2d2bdd14b8246e8b1e241a63293bbe179c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1696248012152633049,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 583e04191a09ca04403f10ea67
b5a093,},Annotations:map[string]string{io.kubernetes.container.hash: cb894288,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e8d031a517413a16fda2adabc04ca2f730fb541cf8fe025a18de5bfa8595a9,PodSandboxId:77f10f7dc88d2b7ed5140ca5bd7318c11e8af29ff90106ffc5a0a04666d3b783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1696248011996250640,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad74ed262ff474e1338c8ec0e95d7eb,},An
notations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7da77360-4a01-4e89-bc0d-3aeb0ce74466 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.439363231Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=cf5f18ad-3d7a-4954-9573-269055139766 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.439422438Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cf5f18ad-3d7a-4954-9573-269055139766 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.440975576Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=658d9b88-eefb-442d-8084-9a323451dfc7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.441356517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248579441343550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=658d9b88-eefb-442d-8084-9a323451dfc7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.441996554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=66a4264c-09a1-4b41-85cb-7422d2ed2583 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.442148444Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=66a4264c-09a1-4b41-85cb-7422d2ed2583 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:09:39 no-preload-304121 crio[729]: time="2023-10-02 12:09:39.442306031Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e172cd6aafba5f002d03f4e61064bc228967577aa9e377fe0b2bb9587f62d58,PodSandboxId:ec313f0f0ab1dc9f814119a453653e9e0ae9370321a3fb6248e2f775633b7c69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1696248036592853251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5b5a2d-e464-477e-9b5c-bf830ee9c640,},Annotations:map[string]string{io.kubernetes.container.hash: d8abb607,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587b3cddeef4cd70d012f9d07cd05ed5cab79768992a248b74b4fa3a6004790e,PodSandboxId:ecbb6c9a8e481d039c79978366d66e15fbedf7485f6ea8b3179bd6b8cc4abece,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1696248035247541993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sprhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d032413b-07c5-4478-bbdf-93383f85f73d,},Annotations:map[string]string{io.kubernetes.container.hash: a8552626,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a9443897a0d50a81f429a9d66aebda04c07159512952e3a89dc7f9405a51d24,PodSandboxId:2a75f65496e4cfdda7933426a7c4b62b12f4073561c6ef9bb49c9ec1b1dc5ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1696248034681177336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-st2bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6623fa3f-9a60-4364-bf08-7e84ae35d4b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ec8c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef0f8b845289f20cedb980bbdd46e4ba218f355ef6e70326cf177d3cceb7904,PodSandboxId:32c3582344c54a990c237d8bee99c855b64a63fe2a327e609fc1023848bab57e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1696248012521472249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf54f7ca4952bccd9496d46885a9b99a,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6f95869f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6899c47560e43a818786d66b58efdea1e54569cb91c811e5887187815f6ed7,PodSandboxId:53f5e0765e7c0e67157a95f8d18ed6dc0eb670bb8ad854a3ccd4f9ae809f1919,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1696248012462424201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf4fde3c63df71
c35302becd8e0a1e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97038dd3f301df983c530d45dad98f3c95a3a4624069ea9bcfcb0e970ffaa7d,PodSandboxId:58547f6befd05beae984b861939c5e2d2bdd14b8246e8b1e241a63293bbe179c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1696248012152633049,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 583e04191a09ca04403f10ea67
b5a093,},Annotations:map[string]string{io.kubernetes.container.hash: cb894288,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e8d031a517413a16fda2adabc04ca2f730fb541cf8fe025a18de5bfa8595a9,PodSandboxId:77f10f7dc88d2b7ed5140ca5bd7318c11e8af29ff90106ffc5a0a04666d3b783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1696248011996250640,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad74ed262ff474e1338c8ec0e95d7eb,},An
notations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=66a4264c-09a1-4b41-85cb-7422d2ed2583 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e172cd6aafba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   ec313f0f0ab1d       storage-provisioner
	587b3cddeef4c       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   9 minutes ago       Running             kube-proxy                0                   ecbb6c9a8e481       kube-proxy-sprhm
	9a9443897a0d5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   2a75f65496e4c       coredns-5dd5756b68-st2bd
	eef0f8b845289       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   32c3582344c54       etcd-no-preload-304121
	ea6899c47560e       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   9 minutes ago       Running             kube-controller-manager   2                   53f5e0765e7c0       kube-controller-manager-no-preload-304121
	b97038dd3f301       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   9 minutes ago       Running             kube-apiserver            2                   58547f6befd05       kube-apiserver-no-preload-304121
	b0e8d031a5174       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   9 minutes ago       Running             kube-scheduler            2                   77f10f7dc88d2       kube-scheduler-no-preload-304121
	
	* 
	* ==> coredns [9a9443897a0d50a81f429a9d66aebda04c07159512952e3a89dc7f9405a51d24] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:38339 - 61925 "HINFO IN 947553240253786351.7225225166883565728. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014792553s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-304121
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-304121
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=no-preload-304121
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T12_00_19_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 12:00:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-304121
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 12:09:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 12:05:46 +0000   Mon, 02 Oct 2023 12:00:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 12:05:46 +0000   Mon, 02 Oct 2023 12:00:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 12:05:46 +0000   Mon, 02 Oct 2023 12:00:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 12:05:46 +0000   Mon, 02 Oct 2023 12:00:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.143
	  Hostname:    no-preload-304121
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e666d702cd1a476db2e4ede71244eec6
	  System UUID:                e666d702-cd1a-476d-b2e4-ede71244eec6
	  Boot ID:                    edd92e65-3aab-40f3-a2ba-c9b9a2a278d4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-st2bd                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-no-preload-304121                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m22s
	  kube-system                 kube-apiserver-no-preload-304121             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-controller-manager-no-preload-304121    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-sprhm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-no-preload-304121             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-57f55c9bc5-6c2hc              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m3s   kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node no-preload-304121 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node no-preload-304121 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node no-preload-304121 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m20s  kubelet          Node no-preload-304121 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m20s  kubelet          Node no-preload-304121 status is now: NodeReady
	  Normal  RegisteredNode           9m7s   node-controller  Node no-preload-304121 event: Registered Node no-preload-304121 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct 2 11:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076201] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.917880] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.238771] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149063] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.575254] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 2 11:55] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.115809] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.141096] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.114280] systemd-fstab-generator[689]: Ignoring "noauto" for root device
	[  +0.264121] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[ +31.895669] systemd-fstab-generator[1233]: Ignoring "noauto" for root device
	[ +19.888493] kauditd_printk_skb: 29 callbacks suppressed
	[Oct 2 12:00] systemd-fstab-generator[3836]: Ignoring "noauto" for root device
	[  +8.813071] systemd-fstab-generator[4165]: Ignoring "noauto" for root device
	[ +13.576539] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [eef0f8b845289f20cedb980bbdd46e4ba218f355ef6e70326cf177d3cceb7904] <==
	* {"level":"info","ts":"2023-10-02T12:00:14.000528Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.143:2380"}
	{"level":"info","ts":"2023-10-02T12:00:14.00113Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"be0eebdc09990bfd","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-10-02T12:00:14.001481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd switched to configuration voters=(13695142847166614525)"}
	{"level":"info","ts":"2023-10-02T12:00:14.001593Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6857887556ef56db","local-member-id":"be0eebdc09990bfd","added-peer-id":"be0eebdc09990bfd","added-peer-peer-urls":["https://192.168.39.143:2380"]}
	{"level":"info","ts":"2023-10-02T12:00:14.001635Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T12:00:14.001652Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T12:00:14.001659Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T12:00:14.156118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-02T12:00:14.156222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-02T12:00:14.156267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd received MsgPreVoteResp from be0eebdc09990bfd at term 1"}
	{"level":"info","ts":"2023-10-02T12:00:14.156302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd became candidate at term 2"}
	{"level":"info","ts":"2023-10-02T12:00:14.156326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd received MsgVoteResp from be0eebdc09990bfd at term 2"}
	{"level":"info","ts":"2023-10-02T12:00:14.156356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd became leader at term 2"}
	{"level":"info","ts":"2023-10-02T12:00:14.156389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: be0eebdc09990bfd elected leader be0eebdc09990bfd at term 2"}
	{"level":"info","ts":"2023-10-02T12:00:14.161251Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"be0eebdc09990bfd","local-member-attributes":"{Name:no-preload-304121 ClientURLs:[https://192.168.39.143:2379]}","request-path":"/0/members/be0eebdc09990bfd/attributes","cluster-id":"6857887556ef56db","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T12:00:14.161434Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T12:00:14.162159Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T12:00:14.162843Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.143:2379"}
	{"level":"info","ts":"2023-10-02T12:00:14.162968Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:00:14.180691Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T12:00:14.18087Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-02T12:00:14.167989Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T12:00:14.181755Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6857887556ef56db","local-member-id":"be0eebdc09990bfd","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:00:14.182154Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:00:14.182307Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  12:09:39 up 14 min,  0 users,  load average: 0.26, 0.35, 0.25
	Linux no-preload-304121 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b97038dd3f301df983c530d45dad98f3c95a3a4624069ea9bcfcb0e970ffaa7d] <==
	* W1002 12:05:17.121674       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:05:17.121783       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1002 12:05:17.121839       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 12:05:17.121696       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:05:17.121960       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:05:17.123279       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:06:15.955356       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1002 12:06:17.122848       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:06:17.122909       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1002 12:06:17.122927       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 12:06:17.124150       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:06:17.124247       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:06:17.124255       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:07:15.955833       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1002 12:08:15.954698       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1002 12:08:17.123234       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:08:17.123345       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1002 12:08:17.123383       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 12:08:17.124510       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:08:17.124619       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:08:17.124655       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:09:15.954701       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [ea6899c47560e43a818786d66b58efdea1e54569cb91c811e5887187815f6ed7] <==
	* I1002 12:04:14.710595       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="120.916µs"
	E1002 12:04:32.606465       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:04:32.956972       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:05:02.612592       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:05:02.966298       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:05:32.618776       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:05:32.978182       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:06:02.625615       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:06:02.988786       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:06:32.631749       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:06:32.996893       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 12:06:50.711219       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="499.804µs"
	E1002 12:07:02.637786       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:07:03.005954       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 12:07:05.713289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="273.644µs"
	E1002 12:07:32.644137       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:07:33.014447       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:08:02.654119       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:08:03.023919       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:08:32.660293       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:08:33.034142       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:09:02.666413       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:09:03.043849       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:09:32.672774       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:09:33.053469       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [587b3cddeef4cd70d012f9d07cd05ed5cab79768992a248b74b4fa3a6004790e] <==
	* I1002 12:00:35.828075       1 server_others.go:69] "Using iptables proxy"
	I1002 12:00:35.852368       1 node.go:141] Successfully retrieved node IP: 192.168.39.143
	I1002 12:00:36.485825       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1002 12:00:36.485868       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 12:00:36.499417       1 server_others.go:152] "Using iptables Proxier"
	I1002 12:00:36.499508       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 12:00:36.499882       1 server.go:846] "Version info" version="v1.28.2"
	I1002 12:00:36.499894       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 12:00:36.508146       1 config.go:188] "Starting service config controller"
	I1002 12:00:36.508518       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 12:00:36.510054       1 config.go:97] "Starting endpoint slice config controller"
	I1002 12:00:36.510111       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 12:00:36.510639       1 config.go:315] "Starting node config controller"
	I1002 12:00:36.512509       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 12:00:36.609803       1 shared_informer.go:318] Caches are synced for service config
	I1002 12:00:36.610995       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 12:00:36.612965       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [b0e8d031a517413a16fda2adabc04ca2f730fb541cf8fe025a18de5bfa8595a9] <==
	* W1002 12:00:17.002316       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 12:00:17.002386       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1002 12:00:17.076757       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 12:00:17.076833       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1002 12:00:17.105133       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 12:00:17.105182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1002 12:00:17.112341       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 12:00:17.112429       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1002 12:00:17.142172       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 12:00:17.142255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1002 12:00:17.189433       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 12:00:17.189496       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1002 12:00:17.206845       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 12:00:17.206941       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1002 12:00:17.356487       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 12:00:17.356601       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1002 12:00:17.365169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1002 12:00:17.365250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1002 12:00:17.366412       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1002 12:00:17.366476       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1002 12:00:17.376241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 12:00:17.376285       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1002 12:00:17.408212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 12:00:17.408268       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1002 12:00:17.871792       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 11:54:52 UTC, ends at Mon 2023-10-02 12:09:40 UTC. --
	Oct 02 12:07:05 no-preload-304121 kubelet[4172]: E1002 12:07:05.691472    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:07:16 no-preload-304121 kubelet[4172]: E1002 12:07:16.690322    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:07:19 no-preload-304121 kubelet[4172]: E1002 12:07:19.802563    4172 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:07:19 no-preload-304121 kubelet[4172]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:07:19 no-preload-304121 kubelet[4172]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:07:19 no-preload-304121 kubelet[4172]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:07:27 no-preload-304121 kubelet[4172]: E1002 12:07:27.691765    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:07:39 no-preload-304121 kubelet[4172]: E1002 12:07:39.691645    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:07:50 no-preload-304121 kubelet[4172]: E1002 12:07:50.691570    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:08:03 no-preload-304121 kubelet[4172]: E1002 12:08:03.690672    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:08:18 no-preload-304121 kubelet[4172]: E1002 12:08:18.691612    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:08:19 no-preload-304121 kubelet[4172]: E1002 12:08:19.800608    4172 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:08:19 no-preload-304121 kubelet[4172]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:08:19 no-preload-304121 kubelet[4172]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:08:19 no-preload-304121 kubelet[4172]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:08:32 no-preload-304121 kubelet[4172]: E1002 12:08:32.691531    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:08:46 no-preload-304121 kubelet[4172]: E1002 12:08:46.693283    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:08:57 no-preload-304121 kubelet[4172]: E1002 12:08:57.691322    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:09:11 no-preload-304121 kubelet[4172]: E1002 12:09:11.690751    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:09:19 no-preload-304121 kubelet[4172]: E1002 12:09:19.800933    4172 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:09:19 no-preload-304121 kubelet[4172]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:09:19 no-preload-304121 kubelet[4172]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:09:19 no-preload-304121 kubelet[4172]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:09:23 no-preload-304121 kubelet[4172]: E1002 12:09:23.691114    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:09:37 no-preload-304121 kubelet[4172]: E1002 12:09:37.692206    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	
	* 
	* ==> storage-provisioner [6e172cd6aafba5f002d03f4e61064bc228967577aa9e377fe0b2bb9587f62d58] <==
	* I1002 12:00:36.743348       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 12:00:36.759783       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 12:00:36.759882       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 12:00:36.771371       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 12:00:36.772119       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3a698911-938c-4466-9c61-c594ff009531", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-304121_1b567081-1828-4e3b-8959-6db51c8b3cb6 became leader
	I1002 12:00:36.772464       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-304121_1b567081-1828-4e3b-8959-6db51c8b3cb6!
	I1002 12:00:36.873311       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-304121_1b567081-1828-4e3b-8959-6db51c8b3cb6!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-304121 -n no-preload-304121
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-304121 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-6c2hc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-304121 describe pod metrics-server-57f55c9bc5-6c2hc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-304121 describe pod metrics-server-57f55c9bc5-6c2hc: exit status 1 (70.601016ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-6c2hc" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-304121 describe pod metrics-server-57f55c9bc5-6c2hc: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.32s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1002 12:01:30.122777  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
E1002 12:01:55.305982  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 12:02:17.886861  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 12:02:22.002266  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 12:02:53.167301  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
E1002 12:03:18.359946  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 12:03:30.518436  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 12:03:45.047900  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 12:04:04.535616  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 12:04:14.659665  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 12:04:15.317847  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 12:04:26.888368  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 12:04:34.099737  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
E1002 12:04:53.563094  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 12:05:38.362524  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 12:05:49.934565  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 12:05:54.840323  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 12:05:57.144758  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
E1002 12:06:30.122165  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
E1002 12:06:55.305534  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 12:07:22.001358  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 12:08:30.518568  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-749860 -n old-k8s-version-749860
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-10-02 12:10:01.367366547 +0000 UTC m=+5651.939052607
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-749860 -n old-k8s-version-749860
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-749860 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-749860 logs -n 25: (1.751222849s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-124285 sudo cat                              | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo find                             | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo crio                             | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-124285                                       | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-448198 | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | disable-driver-mounts-448198                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:47 UTC |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-304121             | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC | 02 Oct 23 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-749860        | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC | 02 Oct 23 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-487027            | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC | 02 Oct 23 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-487027                                  | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-777999  | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC | 02 Oct 23 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC |                     |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-304121                  | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 12:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-749860             | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-487027                 | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-487027                                  | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-777999       | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:59 UTC |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 11:50:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 11:50:14.045882  384965 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:50:14.045995  384965 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:50:14.046005  384965 out.go:309] Setting ErrFile to fd 2...
	I1002 11:50:14.046009  384965 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:50:14.046207  384965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 11:50:14.046807  384965 out.go:303] Setting JSON to false
	I1002 11:50:14.047867  384965 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9160,"bootTime":1696238254,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 11:50:14.047937  384965 start.go:138] virtualization: kvm guest
	I1002 11:50:14.050148  384965 out.go:177] * [default-k8s-diff-port-777999] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 11:50:14.051736  384965 notify.go:220] Checking for updates...
	I1002 11:50:14.051738  384965 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 11:50:14.053419  384965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:50:14.055001  384965 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:50:14.056531  384965 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:50:14.057828  384965 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 11:50:14.059154  384965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 11:50:14.060884  384965 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:50:14.061318  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:50:14.061365  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:50:14.077285  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36537
	I1002 11:50:14.077670  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:50:14.078164  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:50:14.078184  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:50:14.078590  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:50:14.078766  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:50:14.079011  384965 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 11:50:14.079285  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:50:14.079321  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:50:14.093519  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38549
	I1002 11:50:14.093897  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:50:14.094331  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:50:14.094375  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:50:14.094689  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:50:14.094875  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:50:14.127852  384965 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 11:50:14.129579  384965 start.go:298] selected driver: kvm2
	I1002 11:50:14.129589  384965 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-777999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:50:14.129734  384965 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 11:50:14.130441  384965 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:50:14.130517  384965 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 11:50:14.145313  384965 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 11:50:14.145678  384965 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 11:50:14.145737  384965 cni.go:84] Creating CNI manager for ""
	I1002 11:50:14.145747  384965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:50:14.145754  384965 start_flags.go:321] config:
	{Name:default-k8s-diff-port-777999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:50:14.145885  384965 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:50:14.147697  384965 out.go:177] * Starting control plane node default-k8s-diff-port-777999 in cluster default-k8s-diff-port-777999
	I1002 11:50:14.518571  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:14.149188  384965 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:50:14.149229  384965 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1002 11:50:14.149243  384965 cache.go:57] Caching tarball of preloaded images
	I1002 11:50:14.149342  384965 preload.go:174] Found /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 11:50:14.149355  384965 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 11:50:14.149469  384965 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/config.json ...
	I1002 11:50:14.149690  384965 start.go:365] acquiring machines lock for default-k8s-diff-port-777999: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:50:17.590603  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:23.670608  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:26.742637  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:32.822640  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:35.894704  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:41.974682  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:45.046703  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:51.126633  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:54.198624  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:00.278622  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:03.350650  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:09.430627  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:12.502639  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:18.582668  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:21.654622  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:27.734588  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:30.806674  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:36.886711  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:39.958677  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:46.038638  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:49.110583  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:55.190669  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:58.262632  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:04.342658  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:07.414733  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:13.494648  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:16.566610  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:22.646664  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:25.718682  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:31.798673  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:34.870620  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:40.950664  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:44.022695  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:50.102629  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:53.174698  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:59.254603  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:02.326684  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:08.406661  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:11.478769  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:17.558670  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:20.630696  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:26.710600  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:29.782676  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:35.862655  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:38.867149  384505 start.go:369] acquired machines lock for "old-k8s-version-749860" in 4m24.621828644s
	I1002 11:53:38.867251  384505 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:53:38.867260  384505 fix.go:54] fixHost starting: 
	I1002 11:53:38.867725  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:53:38.867761  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:53:38.882900  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45593
	I1002 11:53:38.883484  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:53:38.883950  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:53:38.883974  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:53:38.884318  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:53:38.884530  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:38.884688  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:53:38.886067  384505 fix.go:102] recreateIfNeeded on old-k8s-version-749860: state=Stopped err=<nil>
	I1002 11:53:38.886102  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	W1002 11:53:38.886288  384505 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:53:38.888401  384505 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-749860" ...
	I1002 11:53:38.889752  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Start
	I1002 11:53:38.889924  384505 main.go:141] libmachine: (old-k8s-version-749860) Ensuring networks are active...
	I1002 11:53:38.890638  384505 main.go:141] libmachine: (old-k8s-version-749860) Ensuring network default is active
	I1002 11:53:38.890980  384505 main.go:141] libmachine: (old-k8s-version-749860) Ensuring network mk-old-k8s-version-749860 is active
	I1002 11:53:38.891314  384505 main.go:141] libmachine: (old-k8s-version-749860) Getting domain xml...
	I1002 11:53:38.892257  384505 main.go:141] libmachine: (old-k8s-version-749860) Creating domain...
	I1002 11:53:38.864675  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:53:38.864716  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:53:38.866979  384344 machine.go:91] provisioned docker machine in 4m37.398507067s
	I1002 11:53:38.867033  384344 fix.go:56] fixHost completed within 4m37.419547722s
	I1002 11:53:38.867039  384344 start.go:83] releasing machines lock for "no-preload-304121", held for 4m37.419568347s
	W1002 11:53:38.867080  384344 start.go:688] error starting host: provision: host is not running
	W1002 11:53:38.867230  384344 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1002 11:53:38.867240  384344 start.go:703] Will try again in 5 seconds ...
	I1002 11:53:40.120018  384505 main.go:141] libmachine: (old-k8s-version-749860) Waiting to get IP...
	I1002 11:53:40.120927  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:40.121258  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:40.121366  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:40.121241  385500 retry.go:31] will retry after 204.223254ms: waiting for machine to come up
	I1002 11:53:40.326895  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:40.327332  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:40.327351  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:40.327293  385500 retry.go:31] will retry after 300.58131ms: waiting for machine to come up
	I1002 11:53:40.629931  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:40.630293  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:40.630324  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:40.630247  385500 retry.go:31] will retry after 460.804681ms: waiting for machine to come up
	I1002 11:53:41.092440  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:41.092887  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:41.092914  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:41.092838  385500 retry.go:31] will retry after 573.592817ms: waiting for machine to come up
	I1002 11:53:41.668507  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:41.668916  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:41.668955  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:41.668879  385500 retry.go:31] will retry after 647.261387ms: waiting for machine to come up
	I1002 11:53:42.317738  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:42.318193  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:42.318228  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:42.318135  385500 retry.go:31] will retry after 643.115699ms: waiting for machine to come up
	I1002 11:53:42.963169  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:42.963572  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:42.963595  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:42.963517  385500 retry.go:31] will retry after 1.059074571s: waiting for machine to come up
	I1002 11:53:44.024372  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:44.024750  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:44.024785  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:44.024703  385500 retry.go:31] will retry after 1.142402067s: waiting for machine to come up
	I1002 11:53:43.868857  384344 start.go:365] acquiring machines lock for no-preload-304121: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:53:45.169146  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:45.169470  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:45.169509  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:45.169430  385500 retry.go:31] will retry after 1.244757741s: waiting for machine to come up
	I1002 11:53:46.415640  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:46.416049  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:46.416078  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:46.416030  385500 retry.go:31] will retry after 2.066150597s: waiting for machine to come up
	I1002 11:53:48.483477  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:48.483998  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:48.484023  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:48.483921  385500 retry.go:31] will retry after 2.521584671s: waiting for machine to come up
	I1002 11:53:51.008090  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:51.008535  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:51.008565  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:51.008455  385500 retry.go:31] will retry after 2.896131667s: waiting for machine to come up
	I1002 11:53:53.905835  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:53.906274  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:53.906309  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:53.906207  385500 retry.go:31] will retry after 3.463250216s: waiting for machine to come up
	I1002 11:53:58.755219  384787 start.go:369] acquired machines lock for "embed-certs-487027" in 4m10.971064405s
	I1002 11:53:58.755286  384787 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:53:58.755301  384787 fix.go:54] fixHost starting: 
	I1002 11:53:58.755691  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:53:58.755733  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:53:58.772186  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38267
	I1002 11:53:58.772591  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:53:58.773071  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:53:58.773101  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:53:58.773409  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:53:58.773585  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:53:58.773710  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:53:58.775231  384787 fix.go:102] recreateIfNeeded on embed-certs-487027: state=Stopped err=<nil>
	I1002 11:53:58.775273  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	W1002 11:53:58.775449  384787 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:53:58.778132  384787 out.go:177] * Restarting existing kvm2 VM for "embed-certs-487027" ...
	I1002 11:53:57.373844  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.374176  384505 main.go:141] libmachine: (old-k8s-version-749860) Found IP for machine: 192.168.83.82
	I1002 11:53:57.374195  384505 main.go:141] libmachine: (old-k8s-version-749860) Reserving static IP address...
	I1002 11:53:57.374208  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has current primary IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.374680  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "old-k8s-version-749860", mac: "52:54:00:d4:c3:b0", ip: "192.168.83.82"} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.374711  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | skip adding static IP to network mk-old-k8s-version-749860 - found existing host DHCP lease matching {name: "old-k8s-version-749860", mac: "52:54:00:d4:c3:b0", ip: "192.168.83.82"}
	I1002 11:53:57.374725  384505 main.go:141] libmachine: (old-k8s-version-749860) Reserved static IP address: 192.168.83.82
	I1002 11:53:57.374741  384505 main.go:141] libmachine: (old-k8s-version-749860) Waiting for SSH to be available...
	I1002 11:53:57.374758  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Getting to WaitForSSH function...
	I1002 11:53:57.377368  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.377757  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.377791  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.377890  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Using SSH client type: external
	I1002 11:53:57.377933  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa (-rw-------)
	I1002 11:53:57.377976  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:53:57.377995  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | About to run SSH command:
	I1002 11:53:57.378008  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | exit 0
	I1002 11:53:57.474496  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | SSH cmd err, output: <nil>: 
	I1002 11:53:57.474881  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetConfigRaw
	I1002 11:53:57.475581  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:53:57.478078  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.478423  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.478464  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.478679  384505 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/config.json ...
	I1002 11:53:57.478876  384505 machine.go:88] provisioning docker machine ...
	I1002 11:53:57.478895  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:57.479118  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetMachineName
	I1002 11:53:57.479286  384505 buildroot.go:166] provisioning hostname "old-k8s-version-749860"
	I1002 11:53:57.479300  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetMachineName
	I1002 11:53:57.479509  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:57.481462  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.481768  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.481805  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.481935  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:57.482138  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.482280  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.482438  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:57.482611  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:57.483038  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:57.483051  384505 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-749860 && echo "old-k8s-version-749860" | sudo tee /etc/hostname
	I1002 11:53:57.622724  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-749860
	
	I1002 11:53:57.622760  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:57.626222  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.626663  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.626707  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.626840  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:57.627102  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.627297  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.627513  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:57.627678  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:57.628068  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:57.628089  384505 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-749860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-749860/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-749860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:53:57.767587  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:53:57.767664  384505 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:53:57.767708  384505 buildroot.go:174] setting up certificates
	I1002 11:53:57.767721  384505 provision.go:83] configureAuth start
	I1002 11:53:57.767734  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetMachineName
	I1002 11:53:57.768045  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:53:57.771158  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.771591  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.771620  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.771825  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:57.774031  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.774444  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.774523  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.774529  384505 provision.go:138] copyHostCerts
	I1002 11:53:57.774608  384505 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:53:57.774623  384505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:53:57.774695  384505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:53:57.774787  384505 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:53:57.774797  384505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:53:57.774821  384505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:53:57.774884  384505 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:53:57.774891  384505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:53:57.774912  384505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:53:57.774970  384505 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-749860 san=[192.168.83.82 192.168.83.82 localhost 127.0.0.1 minikube old-k8s-version-749860]
	I1002 11:53:58.003098  384505 provision.go:172] copyRemoteCerts
	I1002 11:53:58.003163  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:53:58.003190  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.005944  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.006310  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.006345  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.006482  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.006734  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.006887  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.007049  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.099927  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:53:58.123424  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 11:53:58.147578  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 11:53:58.171190  384505 provision.go:86] duration metric: configureAuth took 403.448571ms
	I1002 11:53:58.171228  384505 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:53:58.171440  384505 config.go:182] Loaded profile config "old-k8s-version-749860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1002 11:53:58.171575  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.174314  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.174684  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.174723  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.174860  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.175078  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.175274  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.175409  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.175596  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:58.175908  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:58.175923  384505 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:53:58.491028  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:53:58.491062  384505 machine.go:91] provisioned docker machine in 1.012168334s
	I1002 11:53:58.491072  384505 start.go:300] post-start starting for "old-k8s-version-749860" (driver="kvm2")
	I1002 11:53:58.491085  384505 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:53:58.491106  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.491521  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:53:58.491558  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.494009  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.494382  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.494415  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.494546  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.494753  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.494903  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.495037  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.588465  384505 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:53:58.592844  384505 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:53:58.592872  384505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:53:58.592940  384505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:53:58.593047  384505 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:53:58.593171  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:53:58.601583  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:53:58.624453  384505 start.go:303] post-start completed in 133.365398ms
	I1002 11:53:58.624486  384505 fix.go:56] fixHost completed within 19.757224844s
	I1002 11:53:58.624511  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.627104  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.627476  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.627534  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.627695  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.627913  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.628105  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.628253  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.628426  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:58.628749  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:58.628762  384505 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:53:58.755032  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247638.703145377
	
	I1002 11:53:58.755056  384505 fix.go:206] guest clock: 1696247638.703145377
	I1002 11:53:58.755066  384505 fix.go:219] Guest: 2023-10-02 11:53:58.703145377 +0000 UTC Remote: 2023-10-02 11:53:58.624490602 +0000 UTC m=+284.515069275 (delta=78.654775ms)
	I1002 11:53:58.755092  384505 fix.go:190] guest clock delta is within tolerance: 78.654775ms
	I1002 11:53:58.755098  384505 start.go:83] releasing machines lock for "old-k8s-version-749860", held for 19.887910329s
	I1002 11:53:58.755126  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.755438  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:53:58.758172  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.758431  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.758467  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.758673  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.759288  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.759466  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.759560  384505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:53:58.759620  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.759717  384505 ssh_runner.go:195] Run: cat /version.json
	I1002 11:53:58.759748  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.762471  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.762618  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.762847  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.762879  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.762911  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.762943  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.763162  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.763185  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.763347  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.763363  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.763487  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.763661  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.763671  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.763828  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.880436  384505 ssh_runner.go:195] Run: systemctl --version
	I1002 11:53:58.886540  384505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:53:59.035347  384505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:53:59.041510  384505 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:53:59.041604  384505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:53:59.056030  384505 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:53:59.056062  384505 start.go:469] detecting cgroup driver to use...
	I1002 11:53:59.056147  384505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:53:59.068680  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:53:59.080770  384505 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:53:59.080823  384505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:53:59.093059  384505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:53:59.106603  384505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:53:59.223135  384505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:53:59.364085  384505 docker.go:213] disabling docker service ...
	I1002 11:53:59.364161  384505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:53:59.378131  384505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:53:59.390380  384505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:53:59.522236  384505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:53:59.663336  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:53:59.677221  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:53:59.694283  384505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1002 11:53:59.694380  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.703409  384505 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:53:59.703481  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.712316  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.721255  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.731204  384505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:53:59.741152  384505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:53:59.748978  384505 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:53:59.749036  384505 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:53:59.761692  384505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:53:59.770571  384505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:53:59.882809  384505 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:54:00.046741  384505 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:54:00.046843  384505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:54:00.051911  384505 start.go:537] Will wait 60s for crictl version
	I1002 11:54:00.051988  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:00.055847  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:54:00.099999  384505 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:54:00.100084  384505 ssh_runner.go:195] Run: crio --version
	I1002 11:54:00.155271  384505 ssh_runner.go:195] Run: crio --version
	I1002 11:54:00.202213  384505 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1002 11:53:58.780030  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Start
	I1002 11:53:58.780201  384787 main.go:141] libmachine: (embed-certs-487027) Ensuring networks are active...
	I1002 11:53:58.780857  384787 main.go:141] libmachine: (embed-certs-487027) Ensuring network default is active
	I1002 11:53:58.781206  384787 main.go:141] libmachine: (embed-certs-487027) Ensuring network mk-embed-certs-487027 is active
	I1002 11:53:58.781581  384787 main.go:141] libmachine: (embed-certs-487027) Getting domain xml...
	I1002 11:53:58.782269  384787 main.go:141] libmachine: (embed-certs-487027) Creating domain...
	I1002 11:54:00.079808  384787 main.go:141] libmachine: (embed-certs-487027) Waiting to get IP...
	I1002 11:54:00.080676  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:00.081052  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:00.081202  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:00.081070  385615 retry.go:31] will retry after 291.88616ms: waiting for machine to come up
	I1002 11:54:00.374941  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:00.375493  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:00.375526  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:00.375441  385615 retry.go:31] will retry after 315.924643ms: waiting for machine to come up
	I1002 11:54:00.693196  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:00.693804  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:00.693840  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:00.693754  385615 retry.go:31] will retry after 473.967353ms: waiting for machine to come up
	I1002 11:54:01.169616  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:01.170137  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:01.170168  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:01.170099  385615 retry.go:31] will retry after 490.884713ms: waiting for machine to come up
	I1002 11:54:01.662881  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:01.663427  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:01.663459  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:01.663380  385615 retry.go:31] will retry after 590.285109ms: waiting for machine to come up
	I1002 11:54:02.255409  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:02.256020  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:02.256048  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:02.255956  385615 retry.go:31] will retry after 586.734935ms: waiting for machine to come up
	I1002 11:54:00.203709  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:54:00.206822  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:54:00.207269  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:54:00.207308  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:54:00.207533  384505 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1002 11:54:00.211596  384505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:00.224503  384505 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1002 11:54:00.224558  384505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:00.267915  384505 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1002 11:54:00.267986  384505 ssh_runner.go:195] Run: which lz4
	I1002 11:54:00.272086  384505 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1002 11:54:00.276281  384505 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:54:00.276322  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1002 11:54:02.169153  384505 crio.go:444] Took 1.897111 seconds to copy over tarball
	I1002 11:54:02.169248  384505 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 11:54:02.844615  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:02.845091  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:02.845129  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:02.845049  385615 retry.go:31] will retry after 765.906555ms: waiting for machine to come up
	I1002 11:54:03.612904  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:03.613374  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:03.613515  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:03.613306  385615 retry.go:31] will retry after 1.240249135s: waiting for machine to come up
	I1002 11:54:04.855370  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:04.855832  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:04.855858  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:04.855785  385615 retry.go:31] will retry after 1.741253702s: waiting for machine to come up
	I1002 11:54:06.599800  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:06.600279  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:06.600307  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:06.600221  385615 retry.go:31] will retry after 1.945988456s: waiting for machine to come up
	I1002 11:54:05.257359  384505 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.088072266s)
	I1002 11:54:05.257395  384505 crio.go:451] Took 3.088214 seconds to extract the tarball
	I1002 11:54:05.257408  384505 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 11:54:05.296693  384505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:05.347131  384505 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1002 11:54:05.347156  384505 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 11:54:05.347231  384505 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:54:05.347239  384505 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.347291  384505 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.347523  384505 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.347545  384505 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.347590  384505 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1002 11:54:05.347712  384505 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.347797  384505 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.349061  384505 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.349109  384505 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:54:05.349136  384505 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.349165  384505 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.349072  384505 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.349076  384505 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.349075  384505 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.349490  384505 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1002 11:54:05.494581  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.497665  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.499676  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.503426  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1002 11:54:05.504502  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.507776  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.511534  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.589967  384505 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1002 11:54:05.590038  384505 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.590101  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.653382  384505 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1002 11:54:05.653450  384505 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.653539  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.674391  384505 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1002 11:54:05.674430  384505 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1002 11:54:05.674447  384505 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.674467  384505 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1002 11:54:05.674508  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.674498  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.674583  384505 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1002 11:54:05.674621  384505 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.674671  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.676359  384505 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1002 11:54:05.676390  384505 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.676425  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.680824  384505 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1002 11:54:05.680858  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.680871  384505 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.680894  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.680905  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.682827  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1002 11:54:05.690404  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.690496  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.690562  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.810224  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1002 11:54:05.840439  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1002 11:54:05.840472  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.840535  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1002 11:54:05.840544  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1002 11:54:05.840583  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1002 11:54:05.840643  384505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1002 11:54:05.840663  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1002 11:54:05.874997  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1002 11:54:05.875049  384505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1002 11:54:05.875079  384505 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1002 11:54:05.875136  384505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1002 11:54:06.317119  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:54:07.926701  384505 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.609537315s)
	I1002 11:54:07.926715  384505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.051548545s)
	I1002 11:54:07.926786  384505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1002 11:54:07.926855  384505 cache_images.go:92] LoadImages completed in 2.579686998s
	W1002 11:54:07.926953  384505 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I1002 11:54:07.927077  384505 ssh_runner.go:195] Run: crio config
	I1002 11:54:07.991410  384505 cni.go:84] Creating CNI manager for ""
	I1002 11:54:07.991433  384505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:07.991452  384505 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:54:07.991473  384505 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.82 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-749860 NodeName:old-k8s-version-749860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1002 11:54:07.991665  384505 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-749860"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.82
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.82"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-749860
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.82:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
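The rendered kubeadm config above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`. A quick structural sanity check (a sketch, not part of minikube; the file path is illustrative) is to count the `kind:` headers:

```shell
# Write a skeleton of the four-document stream and confirm each document's kind.
cat > /tmp/kubeadm-check.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep '^kind:' /tmp/kubeadm-check.yaml
```

Four `kind:` lines means the stream split cleanly; a missing `---` separator would merge two documents and drop one.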
	
	I1002 11:54:07.991752  384505 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-749860 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-749860 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:54:07.991814  384505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1002 11:54:08.002239  384505 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:54:08.002313  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:54:08.012375  384505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1002 11:54:08.031554  384505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:54:08.050801  384505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1002 11:54:08.068326  384505 ssh_runner.go:195] Run: grep 192.168.83.82	control-plane.minikube.internal$ /etc/hosts
	I1002 11:54:08.072798  384505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.82	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:08.085261  384505 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860 for IP: 192.168.83.82
	I1002 11:54:08.085320  384505 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:54:08.085511  384505 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:54:08.085555  384505 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:54:08.085682  384505 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/client.key
	I1002 11:54:08.085771  384505 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/apiserver.key.bc78c23c
	I1002 11:54:08.085823  384505 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/proxy-client.key
	I1002 11:54:08.085973  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:54:08.086020  384505 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:54:08.086035  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:54:08.086071  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:54:08.086101  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:54:08.086163  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:54:08.086237  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:08.087038  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:54:08.111230  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:54:08.133515  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:54:08.157382  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:54:08.180186  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:54:08.210075  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:54:08.232068  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:54:08.253873  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:54:08.276866  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:54:08.300064  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:54:08.322265  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:54:08.346808  384505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:54:08.367194  384505 ssh_runner.go:195] Run: openssl version
	I1002 11:54:08.374709  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:54:08.389274  384505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:08.395338  384505 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:08.395420  384505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:08.401338  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:54:08.412228  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:54:08.423293  384505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:54:08.428146  384505 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:54:08.428213  384505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:54:08.434177  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:54:08.449342  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:54:08.463678  384505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:54:08.468723  384505 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:54:08.468795  384505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:54:08.476711  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:54:08.492116  384505 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:54:08.498510  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:54:08.504961  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:54:08.513012  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:54:08.520620  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:54:08.528578  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:54:08.534685  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
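The six `openssl x509 … -checkend 86400` runs above ask whether each cluster certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will, non-zero triggers regeneration. A stand-alone sketch of the same probe against a throwaway self-signed certificate (all paths and the subject name are illustrative, not from the test run):

```shell
# Generate a short-lived self-signed cert purely to demonstrate -checkend.
TMPDIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=demo -days 30 \
  -keyout "$TMPDIR/key.pem" -out "$TMPDIR/cert.pem" 2>/dev/null
# Exit 0: the cert is still valid 24h from now; exit 1: it would have expired.
openssl x509 -noout -in "$TMPDIR/cert.pem" -checkend 86400
echo "exit=$?"
```

Checking `-checkend` rather than parsing `notAfter` dates avoids locale- and format-dependent date arithmetic in shell.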
	I1002 11:54:08.541262  384505 kubeadm.go:404] StartCluster: {Name:old-k8s-version-749860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-749860 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.82 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:54:08.541401  384505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:54:08.541474  384505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:08.579821  384505 cri.go:89] found id: ""
	I1002 11:54:08.579899  384505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:54:08.590328  384505 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:54:08.590359  384505 kubeadm.go:636] restartCluster start
	I1002 11:54:08.590419  384505 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:54:08.600034  384505 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:08.601660  384505 kubeconfig.go:92] found "old-k8s-version-749860" server: "https://192.168.83.82:8443"
	I1002 11:54:08.605641  384505 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:54:08.615274  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:08.615340  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:08.630952  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:08.630979  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:08.631032  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:08.642433  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:08.547687  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:08.548295  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:08.548331  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:08.548238  385615 retry.go:31] will retry after 2.817726625s: waiting for machine to come up
	I1002 11:54:11.367346  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:11.367909  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:11.367943  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:11.367859  385615 retry.go:31] will retry after 3.066326625s: waiting for machine to come up
	I1002 11:54:09.142569  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:09.143607  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:09.155937  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:09.642536  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:09.642637  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:09.655230  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:10.142683  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:10.142769  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:10.155206  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:10.642757  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:10.642857  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:10.659345  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:11.142860  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:11.142955  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:11.158336  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:11.642849  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:11.642934  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:11.658819  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:12.143538  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:12.143645  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:12.159984  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:12.642536  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:12.642679  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:12.658031  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:13.143496  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:13.143607  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:13.159279  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:13.643567  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:13.643659  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:13.657189  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:14.435299  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:14.435744  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:14.435777  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:14.435699  385615 retry.go:31] will retry after 3.446313194s: waiting for machine to come up
	I1002 11:54:19.007568  384965 start.go:369] acquired machines lock for "default-k8s-diff-port-777999" in 4m4.857829673s
	I1002 11:54:19.007726  384965 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:54:19.007735  384965 fix.go:54] fixHost starting: 
	I1002 11:54:19.008181  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:54:19.008225  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:54:19.025286  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38903
	I1002 11:54:19.025755  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:54:19.026243  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:54:19.026265  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:54:19.026648  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:54:19.026869  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:19.027056  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:54:19.028773  384965 fix.go:102] recreateIfNeeded on default-k8s-diff-port-777999: state=Stopped err=<nil>
	I1002 11:54:19.028799  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	W1002 11:54:19.028984  384965 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:54:19.031466  384965 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-777999" ...
	I1002 11:54:19.033140  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Start
	I1002 11:54:19.033346  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Ensuring networks are active...
	I1002 11:54:19.034009  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Ensuring network default is active
	I1002 11:54:19.034440  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Ensuring network mk-default-k8s-diff-port-777999 is active
	I1002 11:54:19.034843  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Getting domain xml...
	I1002 11:54:19.035519  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Creating domain...
	I1002 11:54:14.142550  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:14.142618  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:14.154742  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:14.643429  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:14.643522  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:14.656075  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:15.142577  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:15.142669  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:15.154422  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:15.643360  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:15.643450  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:15.655255  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:16.142806  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:16.142948  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:16.154896  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:16.643505  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:16.643581  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:16.655413  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:17.142981  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:17.143087  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:17.156411  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:17.642996  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:17.643100  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:17.656886  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:18.143481  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:18.143563  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:18.157184  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:18.616095  384505 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:54:18.616128  384505 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:54:18.616142  384505 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:54:18.616204  384505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:18.654952  384505 cri.go:89] found id: ""
	I1002 11:54:18.655033  384505 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:54:18.674155  384505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:54:18.685052  384505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:54:18.685116  384505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:18.695816  384505 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:18.695844  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:18.821270  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:17.886333  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.886895  384787 main.go:141] libmachine: (embed-certs-487027) Found IP for machine: 192.168.72.147
	I1002 11:54:17.886926  384787 main.go:141] libmachine: (embed-certs-487027) Reserving static IP address...
	I1002 11:54:17.886947  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has current primary IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.887365  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "embed-certs-487027", mac: "52:54:00:06:60:23", ip: "192.168.72.147"} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.887396  384787 main.go:141] libmachine: (embed-certs-487027) DBG | skip adding static IP to network mk-embed-certs-487027 - found existing host DHCP lease matching {name: "embed-certs-487027", mac: "52:54:00:06:60:23", ip: "192.168.72.147"}
	I1002 11:54:17.887404  384787 main.go:141] libmachine: (embed-certs-487027) Reserved static IP address: 192.168.72.147
	I1002 11:54:17.887420  384787 main.go:141] libmachine: (embed-certs-487027) Waiting for SSH to be available...
	I1002 11:54:17.887437  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Getting to WaitForSSH function...
	I1002 11:54:17.889775  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.890175  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.890214  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.890410  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Using SSH client type: external
	I1002 11:54:17.890434  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa (-rw-------)
	I1002 11:54:17.890470  384787 main.go:141] libmachine: (embed-certs-487027) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:54:17.890502  384787 main.go:141] libmachine: (embed-certs-487027) DBG | About to run SSH command:
	I1002 11:54:17.890514  384787 main.go:141] libmachine: (embed-certs-487027) DBG | exit 0
	I1002 11:54:17.974015  384787 main.go:141] libmachine: (embed-certs-487027) DBG | SSH cmd err, output: <nil>: 
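	The WaitForSSH exchange above shells out to the system `ssh` binary with a fixed set of hardening flags and runs `exit 0` on the guest; a zero exit status means sshd is accepting key-based logins. A sketch of how that external probe command is assembled, assuming a hypothetical helper name (`waitForSSHCmd`) with host, user, and key path taken from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

// waitForSSHCmd builds the external SSH probe shown in the log: strict
// host-key checking off, password auth off, and `exit 0` as the remote
// command so success simply proves sshd is up and the key works.
func waitForSSHCmd(user, host, keyPath string) *exec.Cmd {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		user + "@" + host,
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...)
}

func main() {
	cmd := waitForSSHCmd("docker", "192.168.72.147",
		"/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa")
	fmt.Println(cmd.Args)
}
```

	Running the returned command (`cmd.Run()`) and retrying on error reproduces the "Getting to WaitForSSH function" / "SSH cmd err, output: &lt;nil&gt;" sequence logged above.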
	I1002 11:54:17.974444  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetConfigRaw
	I1002 11:54:17.975209  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:17.977468  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.977798  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.977837  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.978016  384787 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/config.json ...
	I1002 11:54:17.978201  384787 machine.go:88] provisioning docker machine ...
	I1002 11:54:17.978220  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:17.978460  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetMachineName
	I1002 11:54:17.978651  384787 buildroot.go:166] provisioning hostname "embed-certs-487027"
	I1002 11:54:17.978669  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetMachineName
	I1002 11:54:17.978817  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:17.980872  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.981298  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.981333  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.981395  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:17.981587  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:17.981746  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:17.981885  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:17.982020  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:17.982399  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:17.982413  384787 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-487027 && echo "embed-certs-487027" | sudo tee /etc/hostname
	I1002 11:54:18.103274  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-487027
	
	I1002 11:54:18.103311  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.106230  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.106654  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.106709  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.106847  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.107082  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.107266  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.107400  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.107589  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:18.108051  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:18.108081  384787 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-487027' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-487027/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-487027' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:54:18.222398  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:54:18.222431  384787 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:54:18.222453  384787 buildroot.go:174] setting up certificates
	I1002 11:54:18.222488  384787 provision.go:83] configureAuth start
	I1002 11:54:18.222500  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetMachineName
	I1002 11:54:18.222817  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:18.225631  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.226114  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.226150  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.226262  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.228719  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.229096  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.229130  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.229268  384787 provision.go:138] copyHostCerts
	I1002 11:54:18.229336  384787 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:54:18.229351  384787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:54:18.229399  384787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:54:18.229480  384787 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:54:18.229492  384787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:54:18.229511  384787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:54:18.229563  384787 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:54:18.229570  384787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:54:18.229586  384787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:54:18.229630  384787 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.embed-certs-487027 san=[192.168.72.147 192.168.72.147 localhost 127.0.0.1 minikube embed-certs-487027]
	I1002 11:54:18.296130  384787 provision.go:172] copyRemoteCerts
	I1002 11:54:18.296187  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:54:18.296212  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.298721  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.299036  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.299059  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.299181  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.299363  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.299479  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.299628  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:18.384449  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 11:54:18.406096  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:54:18.427407  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 11:54:18.448829  384787 provision.go:86] duration metric: configureAuth took 226.314252ms
	I1002 11:54:18.448858  384787 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:54:18.449065  384787 config.go:182] Loaded profile config "embed-certs-487027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:54:18.449178  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.451995  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.452365  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.452405  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.452596  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.452786  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.452958  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.453077  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.453213  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:18.453571  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:18.453606  384787 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:54:18.754879  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:54:18.754913  384787 machine.go:91] provisioned docker machine in 776.69782ms
	I1002 11:54:18.754927  384787 start.go:300] post-start starting for "embed-certs-487027" (driver="kvm2")
	I1002 11:54:18.754941  384787 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:54:18.754966  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:18.755361  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:54:18.755392  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.758184  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.758644  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.758700  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.758788  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.758981  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.759149  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.759414  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:18.847614  384787 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:54:18.851792  384787 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:54:18.851821  384787 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:54:18.851911  384787 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:54:18.852023  384787 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:54:18.852152  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:54:18.861415  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:18.883190  384787 start.go:303] post-start completed in 128.242372ms
	I1002 11:54:18.883222  384787 fix.go:56] fixHost completed within 20.127922888s
	I1002 11:54:18.883249  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.885771  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.886114  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.886141  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.886335  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.886598  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.886784  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.886922  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.887111  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:18.887556  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:18.887574  384787 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 11:54:19.007352  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247658.948838951
	
	I1002 11:54:19.007388  384787 fix.go:206] guest clock: 1696247658.948838951
	I1002 11:54:19.007404  384787 fix.go:219] Guest: 2023-10-02 11:54:18.948838951 +0000 UTC Remote: 2023-10-02 11:54:18.883226893 +0000 UTC m=+271.237550126 (delta=65.612058ms)
	I1002 11:54:19.007464  384787 fix.go:190] guest clock delta is within tolerance: 65.612058ms
	I1002 11:54:19.007471  384787 start.go:83] releasing machines lock for "embed-certs-487027", held for 20.25221392s
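The guest-clock check above (fix.go) compares the VM's `date +%s.%N` output against the host clock and accepts the host if the drift is within tolerance. A minimal local sketch of that comparison — both timestamps are taken on the same machine here, whereas minikube reads the first one over SSH:

```shell
# Sketch of minikube's guest-clock drift check (illustration only, not
# minikube's actual Go code): take an epoch-with-nanoseconds timestamp,
# wait briefly, take a second one, and compute the absolute delta.
guest=$(date +%s.%N)   # on a real run this value comes from the VM over SSH
sleep 0.05
host=$(date +%s.%N)
# absolute delta in seconds; minikube treats small drift as within tolerance
delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = h - g; if (d < 0) d = -d; print d }')
echo "delta=${delta}s"
```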
	I1002 11:54:19.007510  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.007831  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:19.011020  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.011386  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:19.011418  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.011602  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.012303  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.012520  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.012602  384787 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:54:19.012660  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:19.012946  384787 ssh_runner.go:195] Run: cat /version.json
	I1002 11:54:19.012976  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:19.015652  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.015935  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.016016  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:19.016063  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.016284  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:19.016411  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:19.016439  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.016482  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:19.016638  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:19.016653  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:19.016868  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:19.016871  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:19.017017  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:19.017199  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:19.124634  384787 ssh_runner.go:195] Run: systemctl --version
	I1002 11:54:19.130340  384787 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:54:19.278814  384787 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:54:19.284549  384787 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:54:19.284618  384787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:54:19.300872  384787 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:54:19.300896  384787 start.go:469] detecting cgroup driver to use...
	I1002 11:54:19.300984  384787 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:54:19.314898  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:54:19.327762  384787 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:54:19.327826  384787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:54:19.341164  384787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:54:19.354542  384787 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:54:19.469125  384787 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:54:19.581195  384787 docker.go:213] disabling docker service ...
	I1002 11:54:19.581260  384787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:54:19.595222  384787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:54:19.607587  384787 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:54:19.725376  384787 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:54:19.828507  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:54:19.845782  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:54:19.868464  384787 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:54:19.868530  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.881554  384787 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:54:19.881633  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.894090  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.905922  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
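The four `sed` invocations above rewrite the CRI-O drop-in config: set the pause image, force the `cgroupfs` cgroup manager, drop any existing `conmon_cgroup` line, and append `conmon_cgroup = "pod"` after the manager line. The same edits can be reproduced against a temporary copy of a `02-crio.conf`-style file (no `sudo`, real path not touched):

```shell
# Apply the log's four sed edits to a scratch copy of a CRI-O drop-in.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.6"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF
# same substitutions the log runs with `sudo sed -i` on 02-crio.conf:
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
cat "$conf"
```

The delete-then-append pair guarantees exactly one `conmon_cgroup` line, placed directly after the manager setting.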
	I1002 11:54:19.918336  384787 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:54:19.931259  384787 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:54:19.939861  384787 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:54:19.939925  384787 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:54:19.954089  384787 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:54:19.966438  384787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:54:20.124666  384787 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:54:20.329505  384787 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:54:20.329602  384787 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:54:20.336428  384787 start.go:537] Will wait 60s for crictl version
	I1002 11:54:20.336499  384787 ssh_runner.go:195] Run: which crictl
	I1002 11:54:20.343269  384787 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:54:20.386249  384787 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:54:20.386331  384787 ssh_runner.go:195] Run: crio --version
	I1002 11:54:20.429634  384787 ssh_runner.go:195] Run: crio --version
	I1002 11:54:20.476699  384787 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:54:20.478035  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:20.480720  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:20.481028  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:20.481054  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:20.481230  384787 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1002 11:54:20.485387  384787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
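The `/etc/hosts` update above uses a remove-then-append idiom: filter out any line already ending in the tab-separated hostname, append the fresh mapping, write to a temp file, then copy it into place. A sketch against a scratch file (so no `sudo` is needed):

```shell
# Reproduce the log's hosts-file update idiom on a scratch file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.72.1\thost.minikube.internal\n' > "$hosts"
# strip any existing mapping (match "<tab>host.minikube.internal" at EOL),
# append the fresh one, then install atomically via temp file + cp
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.72.1\thost.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$hosts"
rm -f /tmp/h.$$
cat "$hosts"
```

Running it repeatedly is idempotent: the `grep -v` pass guarantees at most one mapping survives per update.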
	I1002 11:54:20.496957  384787 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:54:20.497028  384787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:20.539655  384787 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 11:54:20.539731  384787 ssh_runner.go:195] Run: which lz4
	I1002 11:54:20.543869  384787 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 11:54:20.548080  384787 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:54:20.548112  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1002 11:54:22.411067  384787 crio.go:444] Took 1.867223 seconds to copy over tarball
	I1002 11:54:22.411155  384787 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
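The preload step above copies an lz4-compressed image tarball to the VM and unpacks it with tar's `-I` (`--use-compress-program`) flag. A self-contained sketch of the same round trip — gzip is substituted for lz4 here since `lz4` may not be installed:

```shell
# Pack a tiny tree and unpack it with tar -I (compress-program), mirroring
# the log's `sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4` shape.
src=$(mktemp -d); dst=$(mktemp -d); ball=$(mktemp)
mkdir -p "$src/lib/crio"
echo "layer-data" > "$src/lib/crio/blob"
tar -I gzip -cf "$ball" -C "$src" .   # create, compressing via gzip
tar -I gzip -xf "$ball" -C "$dst"     # extract into the target root
```

`-C` changes directory before archiving/extracting, which is how minikube unpacks straight into `/var` without staging.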
	I1002 11:54:20.416319  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting to get IP...
	I1002 11:54:20.417168  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.417561  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.417613  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:20.417539  385761 retry.go:31] will retry after 211.341658ms: waiting for machine to come up
	I1002 11:54:20.631097  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.631841  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.632011  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:20.631972  385761 retry.go:31] will retry after 257.651992ms: waiting for machine to come up
	I1002 11:54:20.891519  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.892077  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.892111  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:20.892047  385761 retry.go:31] will retry after 295.599576ms: waiting for machine to come up
	I1002 11:54:21.189739  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.190333  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.190389  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:21.190275  385761 retry.go:31] will retry after 532.182463ms: waiting for machine to come up
	I1002 11:54:21.723822  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.724414  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.724443  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:21.724314  385761 retry.go:31] will retry after 576.235756ms: waiting for machine to come up
	I1002 11:54:22.301975  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:22.302566  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:22.302600  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:22.302479  385761 retry.go:31] will retry after 913.441142ms: waiting for machine to come up
	I1002 11:54:23.217419  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:23.217905  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:23.217943  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:23.217839  385761 retry.go:31] will retry after 1.089960204s: waiting for machine to come up
	I1002 11:54:19.625761  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:19.857853  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:19.977490  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:20.080170  384505 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:54:20.080294  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:20.097093  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:20.611090  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:21.110857  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:21.610499  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:22.111420  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:22.138171  384505 api_server.go:72] duration metric: took 2.057999603s to wait for apiserver process to appear ...
	I1002 11:54:22.138201  384505 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:54:22.138224  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:25.604442  384787 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.193244457s)
	I1002 11:54:25.604543  384787 crio.go:451] Took 3.193443 seconds to extract the tarball
	I1002 11:54:25.604568  384787 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 11:54:25.660515  384787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:25.723308  384787 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 11:54:25.723339  384787 cache_images.go:84] Images are preloaded, skipping loading
	I1002 11:54:25.723436  384787 ssh_runner.go:195] Run: crio config
	I1002 11:54:25.781690  384787 cni.go:84] Creating CNI manager for ""
	I1002 11:54:25.781722  384787 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:25.781748  384787 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:54:25.781775  384787 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.147 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-487027 NodeName:embed-certs-487027 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:54:25.782020  384787 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-487027"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:54:25.782125  384787 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-487027 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:embed-certs-487027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:54:25.782183  384787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:54:25.791322  384787 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:54:25.791398  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:54:25.799709  384787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1002 11:54:25.818900  384787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:54:25.836913  384787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1002 11:54:25.856201  384787 ssh_runner.go:195] Run: grep 192.168.72.147	control-plane.minikube.internal$ /etc/hosts
	I1002 11:54:25.859962  384787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:25.872776  384787 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027 for IP: 192.168.72.147
	I1002 11:54:25.872818  384787 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:54:25.873061  384787 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:54:25.873125  384787 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:54:25.873225  384787 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/client.key
	I1002 11:54:25.873312  384787 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/apiserver.key.b24df18b
	I1002 11:54:25.873375  384787 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/proxy-client.key
	I1002 11:54:25.873530  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:54:25.873590  384787 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:54:25.873602  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:54:25.873633  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:54:25.873667  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:54:25.873702  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:54:25.873757  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:25.874732  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:54:25.901588  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:54:25.929381  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:54:25.955358  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:54:25.980414  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:54:26.008652  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:54:26.038061  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:54:26.067828  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:54:26.098717  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:54:26.131030  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:54:26.162989  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:54:26.189458  384787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:54:26.206791  384787 ssh_runner.go:195] Run: openssl version
	I1002 11:54:26.214436  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:54:26.226064  384787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:26.231428  384787 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:26.231504  384787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:26.238070  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:54:26.252779  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:54:26.267263  384787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:54:26.272245  384787 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:54:26.272316  384787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:54:26.278088  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:54:26.289430  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:54:26.300788  384787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:54:26.305731  384787 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:54:26.305812  384787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:54:26.311712  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:54:26.322855  384787 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:54:26.328688  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:54:26.336570  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:54:26.344412  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:54:26.350583  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:54:26.356815  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:54:26.364674  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:54:26.372219  384787 kubeadm.go:404] StartCluster: {Name:embed-certs-487027 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.2 ClusterName:embed-certs-487027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.147 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:54:26.372341  384787 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:54:26.372397  384787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:26.424018  384787 cri.go:89] found id: ""
	I1002 11:54:26.424131  384787 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:54:26.435493  384787 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:54:26.435520  384787 kubeadm.go:636] restartCluster start
	I1002 11:54:26.435583  384787 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:54:26.447429  384787 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:26.448848  384787 kubeconfig.go:92] found "embed-certs-487027" server: "https://192.168.72.147:8443"
	I1002 11:54:26.452474  384787 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:54:26.462854  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:26.462924  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:26.475723  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:26.475751  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:26.475803  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:26.488962  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:26.989693  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:26.989776  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:27.002889  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:27.489487  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:27.489589  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:27.503912  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:24.308867  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:24.309362  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:24.309392  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:24.309326  385761 retry.go:31] will retry after 1.381170872s: waiting for machine to come up
	I1002 11:54:25.691931  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:25.692285  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:25.692386  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:25.692267  385761 retry.go:31] will retry after 1.748966707s: waiting for machine to come up
	I1002 11:54:27.442708  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:27.443145  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:27.443171  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:27.443107  385761 retry.go:31] will retry after 2.105420589s: waiting for machine to come up
	I1002 11:54:27.138701  384505 api_server.go:269] stopped: https://192.168.83.82:8443/healthz: Get "https://192.168.83.82:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 11:54:27.138757  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:28.249499  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:54:28.249540  384505 api_server.go:103] status: https://192.168.83.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:54:28.750389  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:28.756351  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1002 11:54:28.756390  384505 api_server.go:103] status: https://192.168.83.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1002 11:54:29.250308  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:29.257228  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1002 11:54:29.257264  384505 api_server.go:103] status: https://192.168.83.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1002 11:54:29.750123  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:29.758475  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 200:
	ok
	I1002 11:54:29.769049  384505 api_server.go:141] control plane version: v1.16.0
	I1002 11:54:29.769079  384505 api_server.go:131] duration metric: took 7.630868963s to wait for apiserver health ...
	I1002 11:54:29.769098  384505 cni.go:84] Creating CNI manager for ""
	I1002 11:54:29.769107  384505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:29.770969  384505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:54:27.989735  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:27.989861  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:28.007059  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:28.489495  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:28.489605  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:28.505845  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:28.989879  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:28.989963  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:29.004220  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:29.489847  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:29.489949  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:29.502986  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:29.989170  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:29.989264  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:30.006850  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:30.489389  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:30.489504  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:30.502094  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:30.989302  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:30.989399  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:31.005902  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:31.489967  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:31.490080  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:31.503748  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:31.989317  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:31.989405  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:32.003288  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:32.489803  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:32.489924  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:32.506744  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:29.550027  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:29.550550  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:29.550585  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:29.550488  385761 retry.go:31] will retry after 2.509962026s: waiting for machine to come up
	I1002 11:54:32.063392  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:32.063862  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:32.063887  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:32.063834  385761 retry.go:31] will retry after 2.845339865s: waiting for machine to come up
	I1002 11:54:29.772611  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:54:29.786551  384505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:54:29.807894  384505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:54:29.818837  384505 system_pods.go:59] 7 kube-system pods found
	I1002 11:54:29.818890  384505 system_pods.go:61] "coredns-5644d7b6d9-9xdpq" [2d10c772-e2f0-4bfc-9795-0721f8bab31c] Running
	I1002 11:54:29.818901  384505 system_pods.go:61] "etcd-old-k8s-version-749860" [5826895a-f14d-43ab-9f22-edad964d4a8e] Running
	I1002 11:54:29.818910  384505 system_pods.go:61] "kube-apiserver-old-k8s-version-749860" [3418ba32-aa28-4587-a231-b1f218181e71] Running
	I1002 11:54:29.818919  384505 system_pods.go:61] "kube-controller-manager-old-k8s-version-749860" [e42ff4c0-2ec4-45b9-8189-6a225c79f5c6] Running
	I1002 11:54:29.818927  384505 system_pods.go:61] "kube-proxy-gkhxb" [b3675678-e1cf-4d86-82d9-9e068bd1ba19] Running
	I1002 11:54:29.818939  384505 system_pods.go:61] "kube-scheduler-old-k8s-version-749860" [53a1c8a7-ec6d-4d47-a980-8cfab71ad467] Running
	I1002 11:54:29.818948  384505 system_pods.go:61] "storage-provisioner" [e73d6f24-1392-40ca-b37d-03c035734d1d] Running
	I1002 11:54:29.818964  384505 system_pods.go:74] duration metric: took 11.044895ms to wait for pod list to return data ...
	I1002 11:54:29.818980  384505 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:54:29.822392  384505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:54:29.822455  384505 node_conditions.go:123] node cpu capacity is 2
	I1002 11:54:29.822472  384505 node_conditions.go:105] duration metric: took 3.48317ms to run NodePressure ...
	I1002 11:54:29.822520  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:30.106960  384505 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:54:30.111692  384505 retry.go:31] will retry after 218.727225ms: kubelet not initialised
	I1002 11:54:30.336456  384505 retry.go:31] will retry after 524.868139ms: kubelet not initialised
	I1002 11:54:30.867554  384505 retry.go:31] will retry after 427.897694ms: kubelet not initialised
	I1002 11:54:31.301616  384505 retry.go:31] will retry after 722.780158ms: kubelet not initialised
	I1002 11:54:32.029512  384505 retry.go:31] will retry after 1.205429819s: kubelet not initialised
	I1002 11:54:33.253735  384505 retry.go:31] will retry after 1.476521325s: kubelet not initialised
	I1002 11:54:32.989607  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:32.989718  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:33.004745  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:33.489141  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:33.489215  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:33.506018  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:33.990120  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:33.990217  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:34.005050  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:34.489520  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:34.489608  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:34.501965  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:34.989481  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:34.989584  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:35.002635  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:35.489123  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:35.489199  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:35.502995  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:35.989474  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:35.989565  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:36.003010  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:36.463582  384787 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:54:36.463614  384787 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:54:36.463628  384787 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:54:36.463689  384787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:36.503915  384787 cri.go:89] found id: ""
	I1002 11:54:36.503982  384787 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:54:36.519603  384787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:54:36.529026  384787 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:54:36.529086  384787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:36.538424  384787 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:36.538451  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:36.670492  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:34.910513  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:34.911092  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:34.911136  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:34.911030  385761 retry.go:31] will retry after 3.250805502s: waiting for machine to come up
	I1002 11:54:38.163585  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.164065  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Found IP for machine: 192.168.61.251
	I1002 11:54:38.164104  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has current primary IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.164124  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Reserving static IP address...
	I1002 11:54:38.164549  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-777999", mac: "52:54:00:15:a7:c9", ip: "192.168.61.251"} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.164588  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | skip adding static IP to network mk-default-k8s-diff-port-777999 - found existing host DHCP lease matching {name: "default-k8s-diff-port-777999", mac: "52:54:00:15:a7:c9", ip: "192.168.61.251"}
	I1002 11:54:38.164604  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Reserved static IP address: 192.168.61.251
	I1002 11:54:38.164623  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for SSH to be available...
	I1002 11:54:38.164639  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Getting to WaitForSSH function...
	I1002 11:54:38.166901  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.167279  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.167313  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.167579  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Using SSH client type: external
	I1002 11:54:38.167610  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa (-rw-------)
	I1002 11:54:38.167649  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:54:38.167671  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | About to run SSH command:
	I1002 11:54:38.167694  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | exit 0
	I1002 11:54:38.274617  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | SSH cmd err, output: <nil>: 
	I1002 11:54:38.275081  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetConfigRaw
	I1002 11:54:38.275836  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:38.278750  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.279150  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.279193  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.279391  384965 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/config.json ...
	I1002 11:54:38.279621  384965 machine.go:88] provisioning docker machine ...
	I1002 11:54:38.279646  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:38.279886  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetMachineName
	I1002 11:54:38.280069  384965 buildroot.go:166] provisioning hostname "default-k8s-diff-port-777999"
	I1002 11:54:38.280094  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetMachineName
	I1002 11:54:38.280253  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.282736  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.283104  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.283136  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.283230  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.283399  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.283578  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.283733  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.283892  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:38.284295  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:38.284312  384965 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-777999 && echo "default-k8s-diff-port-777999" | sudo tee /etc/hostname
	I1002 11:54:38.443082  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-777999
	
	I1002 11:54:38.443200  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.446493  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.447061  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.447106  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.447288  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.447549  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.447737  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.447899  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.448132  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:38.448554  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:38.448586  384965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-777999' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-777999/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-777999' | sudo tee -a /etc/hosts; 
				fi
			fi
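The inline script minikube ran above rewrites the `127.0.1.1` entry so the guest's `/etc/hosts` matches its new hostname. A hedged, standalone sketch of the same logic, run against a throwaway copy instead of the real `/etc/hosts` (the file contents here are illustrative; GNU grep/sed `\s` is assumed):

```shell
# Sketch of minikube's /etc/hosts hostname fixup, on a temp file.
hosts=$(mktemp)
name=default-k8s-diff-port-777999
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
if ! grep -q "\s$name" "$hosts"; then            # hostname not present yet
  if grep -q '^127.0.1.1\s' "$hosts"; then       # existing 127.0.1.1 entry: rewrite it
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 $name/g" "$hosts"
  else                                           # otherwise append a fresh entry
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
result=$(cat "$hosts")
rm -f "$hosts"
echo "$result"
```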
	I1002 11:54:38.594884  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:54:38.594920  384965 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:54:38.594956  384965 buildroot.go:174] setting up certificates
	I1002 11:54:38.594975  384965 provision.go:83] configureAuth start
	I1002 11:54:38.594993  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetMachineName
	I1002 11:54:38.595325  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:38.597718  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.598053  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.598088  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.598217  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.600751  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.601065  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.601099  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.601219  384965 provision.go:138] copyHostCerts
	I1002 11:54:38.601300  384965 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:54:38.601316  384965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:54:38.601393  384965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:54:38.601520  384965 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:54:38.601534  384965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:54:38.601565  384965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:54:38.601634  384965 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:54:38.601644  384965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:54:38.601670  384965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:54:38.601728  384965 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-777999 san=[192.168.61.251 192.168.61.251 localhost 127.0.0.1 minikube default-k8s-diff-port-777999]
	I1002 11:54:38.706714  384965 provision.go:172] copyRemoteCerts
	I1002 11:54:38.706783  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:54:38.706847  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.709075  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.709491  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.709547  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.709658  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.709903  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.710087  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.710216  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:38.803103  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 11:54:38.825916  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:54:38.847881  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1002 11:54:38.873772  384965 provision.go:86] duration metric: configureAuth took 278.777931ms
	I1002 11:54:38.873804  384965 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:54:38.874066  384965 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:54:38.874154  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.876864  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.877269  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.877304  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.877453  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.877666  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.877797  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.877936  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.878087  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:38.878441  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:38.878469  384965 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:54:34.736594  384505 retry.go:31] will retry after 1.866771295s: kubelet not initialised
	I1002 11:54:36.609977  384505 retry.go:31] will retry after 4.83087592s: kubelet not initialised
	I1002 11:54:39.495298  384344 start.go:369] acquired machines lock for "no-preload-304121" in 55.626389891s
	I1002 11:54:39.495355  384344 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:54:39.495364  384344 fix.go:54] fixHost starting: 
	I1002 11:54:39.495800  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:54:39.495839  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:54:39.518491  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I1002 11:54:39.518893  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:54:39.519407  384344 main.go:141] libmachine: Using API Version  1
	I1002 11:54:39.519432  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:54:39.519757  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:54:39.519941  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:54:39.520099  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 11:54:39.521857  384344 fix.go:102] recreateIfNeeded on no-preload-304121: state=Stopped err=<nil>
	I1002 11:54:39.521885  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	W1002 11:54:39.522058  384344 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:54:39.524119  384344 out.go:177] * Restarting existing kvm2 VM for "no-preload-304121" ...
	I1002 11:54:39.215761  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:54:39.215794  384965 machine.go:91] provisioned docker machine in 936.155542ms
	I1002 11:54:39.215807  384965 start.go:300] post-start starting for "default-k8s-diff-port-777999" (driver="kvm2")
	I1002 11:54:39.215822  384965 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:54:39.215848  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.216265  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:54:39.216305  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.219032  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.219387  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.219418  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.219542  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.219748  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.219910  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.220054  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:39.317075  384965 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:54:39.321405  384965 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:54:39.321429  384965 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:54:39.321505  384965 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:54:39.321599  384965 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:54:39.321716  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:54:39.330980  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:39.357830  384965 start.go:303] post-start completed in 142.005546ms
	I1002 11:54:39.357863  384965 fix.go:56] fixHost completed within 20.350127508s
	I1002 11:54:39.357900  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.360232  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.360561  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.360598  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.360768  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.360966  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.361139  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.361264  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.361425  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:39.361918  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:39.361939  384965 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:54:39.495129  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247679.435720520
	
	I1002 11:54:39.495155  384965 fix.go:206] guest clock: 1696247679.435720520
	I1002 11:54:39.495166  384965 fix.go:219] Guest: 2023-10-02 11:54:39.43572052 +0000 UTC Remote: 2023-10-02 11:54:39.357871423 +0000 UTC m=+265.343763085 (delta=77.849097ms)
	I1002 11:54:39.495194  384965 fix.go:190] guest clock delta is within tolerance: 77.849097ms
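The `fix.go` lines above sample the guest clock over SSH, diff it against the local time, and accept the machine if the drift is small (77.8ms here). A hedged local sketch of that tolerance check, using two locally sampled timestamps in place of the real host/guest pair (the 1s tolerance is illustrative, not minikube's actual threshold):

```shell
# Sketch of the guest-clock drift check: sample two clocks, take |delta|,
# and compare against a tolerance.
remote=$(date +%s.%N)   # stands in for the host-side timestamp
guest=$(date +%s.%N)    # stands in for `date +%s.%N` run over SSH on the guest
delta=$(awk -v g="$guest" -v r="$remote" 'BEGIN { d = g - r; if (d < 0) d = -d; printf "%.9f", d }')
verdict=$(awk -v d="$delta" 'BEGIN { print (d < 1.0) ? "within tolerance" : "out of tolerance" }')
echo "$verdict"
```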
	I1002 11:54:39.495206  384965 start.go:83] releasing machines lock for "default-k8s-diff-port-777999", held for 20.487515438s
	I1002 11:54:39.495242  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.495652  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:39.498667  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.499055  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.499114  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.499370  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.499891  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.500060  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.500132  384965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:54:39.500199  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.500539  384965 ssh_runner.go:195] Run: cat /version.json
	I1002 11:54:39.500565  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.503388  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.503580  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.503885  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.503917  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.503995  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.504000  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.504081  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.504281  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.504297  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.504459  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.504459  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.504682  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.504680  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:39.504825  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:39.623582  384965 ssh_runner.go:195] Run: systemctl --version
	I1002 11:54:39.631181  384965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:54:39.787298  384965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:54:39.795202  384965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:54:39.795303  384965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:54:39.816471  384965 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:54:39.816495  384965 start.go:469] detecting cgroup driver to use...
	I1002 11:54:39.816567  384965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:54:39.836594  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:54:39.852798  384965 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:54:39.852911  384965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:54:39.868676  384965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:54:39.885480  384965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:54:40.003441  384965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:54:40.146812  384965 docker.go:213] disabling docker service ...
	I1002 11:54:40.146916  384965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:54:40.163451  384965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:54:40.178327  384965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:54:40.339579  384965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:54:40.463502  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:54:40.476402  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:54:40.499021  384965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:54:40.499117  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.511680  384965 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:54:40.511752  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.524364  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.536675  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.549326  384965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
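The four `sed` invocations logged above pin the pause image, switch the cgroup manager to `cgroupfs`, and force `conmon_cgroup = "pod"` in `/etc/crio/crio.conf.d/02-crio.conf`. A hedged sketch applying the same substitutions to a throwaway copy of a minimal config (the starting values are illustrative; GNU `sed -i`/`a` is assumed):

```shell
# Sketch of minikube's CRI-O config rewrites, on a temp copy of 02-crio.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.8"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF
# same substitutions minikube runs via ssh_runner:
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"                        # drop old conmon cgroup
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf" # re-add after manager line
result=$(cat "$conf")
rm -f "$conf"
echo "$result"
```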
	I1002 11:54:40.559447  384965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:54:40.570086  384965 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:54:40.570157  384965 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:54:40.582938  384965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:54:40.594250  384965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:54:40.739528  384965 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:54:40.964248  384965 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:54:40.964336  384965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:54:40.969637  384965 start.go:537] Will wait 60s for crictl version
	I1002 11:54:40.969696  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:54:40.974270  384965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:54:41.016986  384965 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:54:41.017121  384965 ssh_runner.go:195] Run: crio --version
	I1002 11:54:41.061313  384965 ssh_runner.go:195] Run: crio --version
	I1002 11:54:41.112139  384965 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:54:39.525634  384344 main.go:141] libmachine: (no-preload-304121) Calling .Start
	I1002 11:54:39.525802  384344 main.go:141] libmachine: (no-preload-304121) Ensuring networks are active...
	I1002 11:54:39.526566  384344 main.go:141] libmachine: (no-preload-304121) Ensuring network default is active
	I1002 11:54:39.526860  384344 main.go:141] libmachine: (no-preload-304121) Ensuring network mk-no-preload-304121 is active
	I1002 11:54:39.527227  384344 main.go:141] libmachine: (no-preload-304121) Getting domain xml...
	I1002 11:54:39.527942  384344 main.go:141] libmachine: (no-preload-304121) Creating domain...
	I1002 11:54:40.973483  384344 main.go:141] libmachine: (no-preload-304121) Waiting to get IP...
	I1002 11:54:40.974731  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:40.975262  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:40.975359  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:40.975266  385933 retry.go:31] will retry after 231.149062ms: waiting for machine to come up
	I1002 11:54:41.207806  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:41.208486  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:41.208522  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:41.208461  385933 retry.go:31] will retry after 390.353931ms: waiting for machine to come up
	I1002 11:54:37.939830  384787 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.269286101s)
	I1002 11:54:37.939876  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:38.149675  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:38.246179  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:38.327794  384787 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:54:38.327884  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:38.343240  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:38.855719  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:39.355428  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:39.854862  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:40.355228  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:40.855597  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:40.891530  384787 api_server.go:72] duration metric: took 2.563733499s to wait for apiserver process to appear ...
	I1002 11:54:40.891560  384787 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:54:40.891581  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:40.892226  384787 api_server.go:269] stopped: https://192.168.72.147:8443/healthz: Get "https://192.168.72.147:8443/healthz": dial tcp 192.168.72.147:8443: connect: connection refused
	I1002 11:54:40.892274  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:40.892799  384787 api_server.go:269] stopped: https://192.168.72.147:8443/healthz: Get "https://192.168.72.147:8443/healthz": dial tcp 192.168.72.147:8443: connect: connection refused
	I1002 11:54:41.393747  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:41.113638  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:41.116930  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:41.117360  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:41.117396  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:41.117684  384965 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1002 11:54:41.122622  384965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:41.138418  384965 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:54:41.138496  384965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:41.189380  384965 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 11:54:41.189465  384965 ssh_runner.go:195] Run: which lz4
	I1002 11:54:41.194945  384965 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 11:54:41.200215  384965 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:54:41.200254  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1002 11:54:43.164279  384965 crio.go:444] Took 1.969380 seconds to copy over tarball
	I1002 11:54:43.164370  384965 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 11:54:41.447247  384505 retry.go:31] will retry after 8.441231321s: kubelet not initialised
	I1002 11:54:41.600866  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:41.601691  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:41.601729  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:41.601345  385933 retry.go:31] will retry after 381.859851ms: waiting for machine to come up
	I1002 11:54:41.985107  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:41.986545  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:41.986572  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:41.986434  385933 retry.go:31] will retry after 606.51751ms: waiting for machine to come up
	I1002 11:54:42.594443  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:42.595004  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:42.595031  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:42.594935  385933 retry.go:31] will retry after 474.689172ms: waiting for machine to come up
	I1002 11:54:43.071618  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:43.072140  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:43.072196  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:43.072085  385933 retry.go:31] will retry after 931.163736ms: waiting for machine to come up
	I1002 11:54:44.005228  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:44.005899  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:44.005927  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:44.005852  385933 retry.go:31] will retry after 1.133426769s: waiting for machine to come up
	I1002 11:54:45.141320  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:45.142068  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:45.142099  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:45.141965  385933 retry.go:31] will retry after 1.458717431s: waiting for machine to come up
	I1002 11:54:45.416658  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:54:45.416697  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:54:45.416713  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:45.489874  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:54:45.489918  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:54:45.893115  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:45.901437  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:54:45.901477  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:54:46.393114  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:46.399302  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:54:46.399337  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:54:46.892875  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:46.898524  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 200:
	ok
	I1002 11:54:46.908311  384787 api_server.go:141] control plane version: v1.28.2
	I1002 11:54:46.908342  384787 api_server.go:131] duration metric: took 6.016772427s to wait for apiserver health ...
	I1002 11:54:46.908354  384787 cni.go:84] Creating CNI manager for ""
	I1002 11:54:46.908364  384787 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:47.225292  384787 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:54:47.481617  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:54:47.499011  384787 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:54:47.535238  384787 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:54:46.620757  384965 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.456345361s)
	I1002 11:54:46.620801  384965 crio.go:451] Took 3.456492 seconds to extract the tarball
	I1002 11:54:46.620814  384965 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 11:54:46.677550  384965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:46.810235  384965 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 11:54:46.810265  384965 cache_images.go:84] Images are preloaded, skipping loading
	I1002 11:54:46.810334  384965 ssh_runner.go:195] Run: crio config
	I1002 11:54:46.875355  384965 cni.go:84] Creating CNI manager for ""
	I1002 11:54:46.875378  384965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:46.875397  384965 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:54:46.875417  384965 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.251 APIServerPort:8444 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-777999 NodeName:default-k8s-diff-port-777999 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:54:46.875588  384965 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.251
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-777999"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:54:46.875674  384965 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-777999 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1002 11:54:46.875737  384965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:54:46.886943  384965 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:54:46.887034  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:54:46.898434  384965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1002 11:54:46.917830  384965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:54:46.936297  384965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1002 11:54:46.954413  384965 ssh_runner.go:195] Run: grep 192.168.61.251	control-plane.minikube.internal$ /etc/hosts
	I1002 11:54:46.958832  384965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:46.970802  384965 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999 for IP: 192.168.61.251
	I1002 11:54:46.970845  384965 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:54:46.971031  384965 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:54:46.971093  384965 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:54:46.971194  384965 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/client.key
	I1002 11:54:46.971286  384965 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/apiserver.key.04d51ca9
	I1002 11:54:46.971341  384965 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/proxy-client.key
	I1002 11:54:46.971469  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:54:46.971507  384965 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:54:46.971524  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:54:46.971572  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:54:46.971614  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:54:46.971652  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:54:46.971713  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:46.972319  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:54:46.998880  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:54:47.024639  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:54:47.048695  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:54:47.076815  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:54:47.102469  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:54:47.128913  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:54:47.155863  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:54:47.185058  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:54:47.212289  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:54:47.236848  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:54:47.261485  384965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:54:47.278535  384965 ssh_runner.go:195] Run: openssl version
	I1002 11:54:47.284888  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:54:47.296352  384965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:54:47.301262  384965 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:54:47.301331  384965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:54:47.307136  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:54:47.317650  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:54:47.328371  384965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:47.333341  384965 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:47.333421  384965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:47.339268  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:54:47.349646  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:54:47.360575  384965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:54:47.367279  384965 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:54:47.367346  384965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:54:47.374693  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:54:47.386302  384965 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:54:47.391448  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:54:47.397407  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:54:47.403122  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:54:47.408810  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:54:47.414684  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:54:47.420606  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
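	The `openssl x509 -noout -in <cert> -checkend 86400` runs above ask whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would trigger certificate regeneration. A minimal Go sketch of the same check, using a self-signed demo certificate generated in-memory (names here are illustrative, not minikube's actual code):

	```go
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"time"
	)

	// checkEnd reports whether cert is still valid `window` from now,
	// mirroring `openssl x509 -checkend <seconds>`.
	func checkEnd(cert *x509.Certificate, window time.Duration) bool {
		return time.Now().Add(window).Before(cert.NotAfter)
	}

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "demo"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(48 * time.Hour), // expires in 2 days
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		cert, err := x509.ParseCertificate(der)
		if err != nil {
			panic(err)
		}
		fmt.Println(checkEnd(cert, 24*time.Hour)) // still valid tomorrow -> true
		fmt.Println(checkEnd(cert, 72*time.Hour)) // expired in 3 days -> false
	}
	```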
	I1002 11:54:47.426568  384965 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-777999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:54:47.426702  384965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:54:47.426747  384965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:47.467190  384965 cri.go:89] found id: ""
	I1002 11:54:47.467275  384965 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:54:47.478921  384965 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:54:47.478944  384965 kubeadm.go:636] restartCluster start
	I1002 11:54:47.479016  384965 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:54:47.492971  384965 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:47.494091  384965 kubeconfig.go:92] found "default-k8s-diff-port-777999" server: "https://192.168.61.251:8444"
	I1002 11:54:47.498738  384965 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:54:47.510376  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:47.510454  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:47.523397  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:47.523417  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:47.523459  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:47.536893  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:48.037653  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:48.037746  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:48.055280  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:48.537887  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:48.537979  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:48.555759  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:49.037998  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:49.038108  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:46.602496  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:46.654672  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:46.654707  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:46.602962  385933 retry.go:31] will retry after 1.25268648s: waiting for machine to come up
	I1002 11:54:47.857506  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:47.858115  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:47.858149  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:47.858061  385933 retry.go:31] will retry after 2.104571101s: waiting for machine to come up
	I1002 11:54:49.964533  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:49.964997  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:49.965031  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:49.964942  385933 retry.go:31] will retry after 2.047553587s: waiting for machine to come up
	I1002 11:54:47.766443  384787 system_pods.go:59] 8 kube-system pods found
	I1002 11:54:47.766485  384787 system_pods.go:61] "coredns-5dd5756b68-6glsj" [ad7c852a-cdac-4ada-99da-4115b447f00c] Running
	I1002 11:54:47.766498  384787 system_pods.go:61] "etcd-embed-certs-487027" [78f5c4ed-7baf-4339-811f-c25e934de0c4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:54:47.766516  384787 system_pods.go:61] "kube-apiserver-embed-certs-487027" [275bb65c-b955-43d9-839b-6439e8c19662] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:54:47.766524  384787 system_pods.go:61] "kube-controller-manager-embed-certs-487027" [d798407e-abe2-4b70-952e-1274fff006bc] Running
	I1002 11:54:47.766532  384787 system_pods.go:61] "kube-proxy-wjjtv" [54e35e5e-7045-497f-8fef-322fe0e43afd] Running
	I1002 11:54:47.766543  384787 system_pods.go:61] "kube-scheduler-embed-certs-487027" [62c61cf2-f18e-47a9-9729-20e87fe02c89] Running
	I1002 11:54:47.766556  384787 system_pods.go:61] "metrics-server-57f55c9bc5-d8c7b" [71c33b74-c942-403a-a1d4-2b852f0070a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:54:47.766568  384787 system_pods.go:61] "storage-provisioner" [0a8120e1-c879-4726-abab-f95a4a3c8721] Running
	I1002 11:54:47.766581  384787 system_pods.go:74] duration metric: took 231.314062ms to wait for pod list to return data ...
	I1002 11:54:47.766593  384787 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:54:48.206673  384787 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:54:48.206710  384787 node_conditions.go:123] node cpu capacity is 2
	I1002 11:54:48.206722  384787 node_conditions.go:105] duration metric: took 440.12142ms to run NodePressure ...
	I1002 11:54:48.206743  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:48.736269  384787 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:54:48.754061  384787 kubeadm.go:787] kubelet initialised
	I1002 11:54:48.754094  384787 kubeadm.go:788] duration metric: took 17.795803ms waiting for restarted kubelet to initialise ...
	I1002 11:54:48.754106  384787 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:54:48.763480  384787 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:50.815900  384787 pod_ready.go:102] pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace has status "Ready":"False"
	I1002 11:54:51.815729  384787 pod_ready.go:92] pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:51.815752  384787 pod_ready.go:81] duration metric: took 3.052241738s waiting for pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:51.815761  384787 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	W1002 11:54:49.055614  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:49.537412  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:49.537517  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:49.554838  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:50.037334  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:50.037460  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:50.050213  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:50.537454  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:50.537586  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:50.551733  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:51.037281  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:51.037394  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:51.055077  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:51.537591  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:51.537672  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:51.555315  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:52.037929  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:52.038038  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:52.052852  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:52.537358  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:52.537435  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:52.553169  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:53.037814  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:53.037913  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:53.055176  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:53.537764  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:53.537869  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:53.554864  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:54.037941  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:54.038052  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:49.895219  384505 retry.go:31] will retry after 9.020637322s: kubelet not initialised
	I1002 11:54:52.015240  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:52.015623  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:52.015646  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:52.015594  385933 retry.go:31] will retry after 3.361214112s: waiting for machine to come up
	I1002 11:54:55.378293  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:55.378805  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:55.378853  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:55.378772  385933 retry.go:31] will retry after 3.33521217s: waiting for machine to come up
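	The `retry.go:31` lines above ("will retry after 1.25268648s ... 2.104571101s ... 3.361214112s") come from a backoff helper that sleeps a growing, jittered interval between attempts while waiting for the VM to acquire an IP. A sketch of that pattern (the doubling factor and jitter range are illustrative assumptions, not minikube's exact constants):

	```go
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryBackoff calls fn until it succeeds or attempts run out,
	// sleeping an exponentially growing, jittered interval between tries.
	func retryBackoff(attempts int, base time.Duration, fn func() error) error {
		delay := base
		for i := 0; i < attempts; i++ {
			if err := fn(); err == nil {
				return nil
			}
			// jitter in [delay, 1.5*delay) so parallel waiters don't synchronize
			sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
			fmt.Printf("will retry after %v\n", sleep)
			time.Sleep(sleep)
			delay *= 2
		}
		return errors.New("gave up waiting")
	}

	func main() {
		calls := 0
		err := retryBackoff(5, 10*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("machine not up yet")
			}
			return nil
		})
		fmt.Println(err == nil, calls) // succeeds on the third attempt
	}
	```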
	I1002 11:54:53.337930  384787 pod_ready.go:92] pod "etcd-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:53.337967  384787 pod_ready.go:81] duration metric: took 1.522199476s waiting for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:53.337979  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:53.344756  384787 pod_ready.go:92] pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:53.344782  384787 pod_ready.go:81] duration metric: took 6.79552ms waiting for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:53.344791  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:55.549698  384787 pod_ready.go:102] pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace has status "Ready":"False"
	I1002 11:54:57.049146  384787 pod_ready.go:92] pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:57.049177  384787 pod_ready.go:81] duration metric: took 3.704379238s waiting for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:57.049192  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wjjtv" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:57.055125  384787 pod_ready.go:92] pod "kube-proxy-wjjtv" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:57.055144  384787 pod_ready.go:81] duration metric: took 5.945156ms waiting for pod "kube-proxy-wjjtv" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:57.055152  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	W1002 11:54:54.056234  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:54.537821  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:54.537918  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:54.552634  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:55.037141  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:55.037220  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:55.052963  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:55.537432  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:55.537531  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:55.552525  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:56.036986  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:56.037074  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:56.049750  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:56.537060  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:56.537144  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:56.548686  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:57.037931  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:57.038029  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:57.049828  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:57.511461  384965 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
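	The long run of "Checking apiserver status ..." / "stopped: unable to get apiserver pid" pairs above is one poll loop: run `pgrep` roughly every 500ms until a kube-apiserver pid appears or the context deadline fires, at which point restartCluster concludes "needs reconfigure: apiserver error: context deadline exceeded". A self-contained sketch of that wait loop (a simulated check stands in for the real `pgrep` call; this is an approximation, not minikube's source):

	```go
	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// waitForProcess polls check every interval until it succeeds or ctx
	// expires, mirroring the apiserver-status loop in the log.
	func waitForProcess(ctx context.Context, interval time.Duration, check func() error) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			if err := check(); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // callers see "context deadline exceeded"
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
		defer cancel()
		// simulated pgrep that never finds a kube-apiserver process
		err := waitForProcess(ctx, 10*time.Millisecond, func() error {
			return errors.New("unable to get apiserver pid")
		})
		fmt.Println(errors.Is(err, context.DeadlineExceeded))
	}
	```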
	I1002 11:54:57.511495  384965 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:54:57.511510  384965 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:54:57.511571  384965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:57.552784  384965 cri.go:89] found id: ""
	I1002 11:54:57.552866  384965 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:54:57.567867  384965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:54:57.578391  384965 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:54:57.578474  384965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:57.587065  384965 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:57.587086  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:57.717787  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.423038  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.607300  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.687023  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.778674  384965 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:54:58.778770  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:58.794920  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:58.923574  384505 retry.go:31] will retry after 19.662203801s: kubelet not initialised
	I1002 11:54:58.715622  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.716211  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has current primary IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.716229  384344 main.go:141] libmachine: (no-preload-304121) Found IP for machine: 192.168.39.143
	I1002 11:54:58.716248  384344 main.go:141] libmachine: (no-preload-304121) Reserving static IP address...
	I1002 11:54:58.716781  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "no-preload-304121", mac: "52:54:00:11:b9:ea", ip: "192.168.39.143"} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.716823  384344 main.go:141] libmachine: (no-preload-304121) Reserved static IP address: 192.168.39.143
	I1002 11:54:58.716845  384344 main.go:141] libmachine: (no-preload-304121) DBG | skip adding static IP to network mk-no-preload-304121 - found existing host DHCP lease matching {name: "no-preload-304121", mac: "52:54:00:11:b9:ea", ip: "192.168.39.143"}
	I1002 11:54:58.716864  384344 main.go:141] libmachine: (no-preload-304121) DBG | Getting to WaitForSSH function...
	I1002 11:54:58.716875  384344 main.go:141] libmachine: (no-preload-304121) Waiting for SSH to be available...
	I1002 11:54:58.719551  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.719991  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.720031  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.720236  384344 main.go:141] libmachine: (no-preload-304121) DBG | Using SSH client type: external
	I1002 11:54:58.720273  384344 main.go:141] libmachine: (no-preload-304121) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa (-rw-------)
	I1002 11:54:58.720309  384344 main.go:141] libmachine: (no-preload-304121) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:54:58.720329  384344 main.go:141] libmachine: (no-preload-304121) DBG | About to run SSH command:
	I1002 11:54:58.720355  384344 main.go:141] libmachine: (no-preload-304121) DBG | exit 0
	I1002 11:54:58.866583  384344 main.go:141] libmachine: (no-preload-304121) DBG | SSH cmd err, output: <nil>: 
	I1002 11:54:58.866916  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetConfigRaw
	I1002 11:54:58.867637  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:54:58.870844  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.871270  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.871305  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.871677  384344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/config.json ...
	I1002 11:54:58.871886  384344 machine.go:88] provisioning docker machine ...
	I1002 11:54:58.871906  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:54:58.872159  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetMachineName
	I1002 11:54:58.872343  384344 buildroot.go:166] provisioning hostname "no-preload-304121"
	I1002 11:54:58.872370  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetMachineName
	I1002 11:54:58.872566  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:58.875795  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.876215  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.876252  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.876420  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:58.876592  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:58.876766  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:58.876935  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:58.877113  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:58.877512  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:54:58.877528  384344 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-304121 && echo "no-preload-304121" | sudo tee /etc/hostname
	I1002 11:54:59.032306  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-304121
	
	I1002 11:54:59.032336  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.035842  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.036373  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.036412  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.036749  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:59.036953  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.037145  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.037313  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:59.037564  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:59.038035  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:54:59.038064  384344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-304121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-304121/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-304121' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:54:59.175880  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:54:59.175910  384344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:54:59.175933  384344 buildroot.go:174] setting up certificates
	I1002 11:54:59.175945  384344 provision.go:83] configureAuth start
	I1002 11:54:59.175957  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetMachineName
	I1002 11:54:59.176253  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:54:59.179169  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.179541  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.179577  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.179797  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.182011  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.182418  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.182451  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.182653  384344 provision.go:138] copyHostCerts
	I1002 11:54:59.182718  384344 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:54:59.182732  384344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:54:59.182807  384344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:54:59.182919  384344 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:54:59.182931  384344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:54:59.182963  384344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:54:59.183050  384344 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:54:59.183060  384344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:54:59.183088  384344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:54:59.183174  384344 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.no-preload-304121 san=[192.168.39.143 192.168.39.143 localhost 127.0.0.1 minikube no-preload-304121]
	I1002 11:54:59.492171  384344 provision.go:172] copyRemoteCerts
	I1002 11:54:59.492239  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:54:59.492266  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.495249  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.495698  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.495746  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.495900  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:59.496143  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.496299  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:59.496460  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:54:59.594538  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1002 11:54:59.625319  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 11:54:59.652745  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:54:59.676895  384344 provision.go:86] duration metric: configureAuth took 500.931279ms
	I1002 11:54:59.676930  384344 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:54:59.677160  384344 config.go:182] Loaded profile config "no-preload-304121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:54:59.677259  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.680393  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.680730  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.680764  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.681190  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:59.681491  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.681698  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.681875  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:59.682112  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:59.682651  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:54:59.682684  384344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:55:00.029184  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:55:00.029213  384344 machine.go:91] provisioned docker machine in 1.157312136s
	I1002 11:55:00.029226  384344 start.go:300] post-start starting for "no-preload-304121" (driver="kvm2")
	I1002 11:55:00.029240  384344 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:55:00.029296  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.029683  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:55:00.029722  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.032977  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.033456  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.033488  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.033677  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.033919  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.034136  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.034351  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:55:00.137946  384344 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:55:00.144169  384344 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:55:00.144209  384344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:55:00.144291  384344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:55:00.144405  384344 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:55:00.144609  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:55:00.157898  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:55:00.186547  384344 start.go:303] post-start completed in 157.300734ms
	I1002 11:55:00.186580  384344 fix.go:56] fixHost completed within 20.691216247s
	I1002 11:55:00.186609  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.189905  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.190374  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.190411  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.190718  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.190940  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.191159  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.191335  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.191494  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:55:00.191981  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:55:00.191996  384344 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:55:00.328123  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247700.270150690
	
	I1002 11:55:00.328155  384344 fix.go:206] guest clock: 1696247700.270150690
	I1002 11:55:00.328166  384344 fix.go:219] Guest: 2023-10-02 11:55:00.27015069 +0000 UTC Remote: 2023-10-02 11:55:00.186584697 +0000 UTC m=+358.877281851 (delta=83.565993ms)
	I1002 11:55:00.328193  384344 fix.go:190] guest clock delta is within tolerance: 83.565993ms
	I1002 11:55:00.328207  384344 start.go:83] releasing machines lock for "no-preload-304121", held for 20.832874678s
	I1002 11:55:00.328234  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.328584  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:55:00.331898  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.332432  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.332468  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.332651  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.333263  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.333480  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.333586  384344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:55:00.333647  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.333895  384344 ssh_runner.go:195] Run: cat /version.json
	I1002 11:55:00.333943  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.336673  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.336920  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.337021  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.337083  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.337207  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.337399  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.337487  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.337518  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.337566  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.337642  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.337734  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:55:00.337835  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.338131  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.338307  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:55:00.427708  384344 ssh_runner.go:195] Run: systemctl --version
	I1002 11:55:00.456367  384344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:55:00.604389  384344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:55:00.612859  384344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:55:00.612968  384344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:55:00.627986  384344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:55:00.628056  384344 start.go:469] detecting cgroup driver to use...
	I1002 11:55:00.628128  384344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:55:00.643670  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:55:00.656987  384344 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:55:00.657058  384344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:55:00.669708  384344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:55:00.682586  384344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:55:00.790044  384344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:55:00.913634  384344 docker.go:213] disabling docker service ...
	I1002 11:55:00.913717  384344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:55:00.926496  384344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:55:00.938769  384344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:55:01.045413  384344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:55:01.169133  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:55:01.182168  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:55:01.201850  384344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:55:01.201926  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.214874  384344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:55:01.214972  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.225123  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.237560  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.247898  384344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:55:01.260797  384344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:55:01.271528  384344 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:55:01.271602  384344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:55:01.285906  384344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:55:01.297623  384344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:55:01.429828  384344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:55:01.617340  384344 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:55:01.617486  384344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:55:01.622871  384344 start.go:537] Will wait 60s for crictl version
	I1002 11:55:01.622942  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:01.627257  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:55:01.674032  384344 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:55:01.674130  384344 ssh_runner.go:195] Run: crio --version
	I1002 11:55:01.726822  384344 ssh_runner.go:195] Run: crio --version
	I1002 11:55:01.777433  384344 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:54:59.549254  384787 pod_ready.go:102] pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:01.550493  384787 pod_ready.go:92] pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:01.550524  384787 pod_ready.go:81] duration metric: took 4.495364436s waiting for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:01.550537  384787 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:59.310529  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:59.811582  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:00.310859  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:00.810518  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:01.311217  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:01.336761  384965 api_server.go:72] duration metric: took 2.55808678s to wait for apiserver process to appear ...
	I1002 11:55:01.336793  384965 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:55:01.336814  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:01.778891  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:55:01.781741  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:01.782048  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:01.782088  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:01.782334  384344 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 11:55:01.787047  384344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:55:01.803390  384344 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:55:01.803482  384344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:55:01.853839  384344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 11:55:01.853868  384344 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.2 registry.k8s.io/kube-controller-manager:v1.28.2 registry.k8s.io/kube-scheduler:v1.28.2 registry.k8s.io/kube-proxy:v1.28.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 11:55:01.853954  384344 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:01.853966  384344 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:01.854164  384344 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:01.854189  384344 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:01.854254  384344 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:01.854169  384344 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:01.854325  384344 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1002 11:55:01.854171  384344 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:01.855315  384344 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:01.855339  384344 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:01.855355  384344 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:01.855809  384344 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:01.855841  384344 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:01.855856  384344 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1002 11:55:01.855809  384344 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:01.855815  384344 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.001275  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.001299  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:02.001275  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:02.002150  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1002 11:55:02.004275  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:02.007591  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:02.028882  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:02.199630  384344 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1002 11:55:02.199751  384344 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.199678  384344 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1002 11:55:02.199838  384344 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:02.199866  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.199890  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.199707  384344 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.2" does not exist at hash "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57" in container runtime
	I1002 11:55:02.199951  384344 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:02.199981  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305560  384344 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.2" does not exist at hash "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce" in container runtime
	I1002 11:55:02.305618  384344 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:02.305670  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305721  384344 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.2" does not exist at hash "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8" in container runtime
	I1002 11:55:02.305784  384344 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:02.305826  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305853  384344 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.2" needs transfer: "registry.k8s.io/kube-proxy:v1.28.2" does not exist at hash "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0" in container runtime
	I1002 11:55:02.305893  384344 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:02.305934  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305943  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:02.305999  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:02.306035  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.403560  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:02.403701  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1002 11:55:02.403791  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1002 11:55:02.403861  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:02.403983  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1002 11:55:02.404056  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1002 11:55:02.404148  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.2
	I1002 11:55:02.404200  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.2
	I1002 11:55:02.404274  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:02.512787  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.2
	I1002 11:55:02.512909  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1002 11:55:02.513038  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1002 11:55:02.513062  384344 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1002 11:55:02.513091  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1002 11:55:02.513169  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.2 (exists)
	I1002 11:55:02.513217  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.2
	I1002 11:55:02.513258  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.2
	I1002 11:55:02.513292  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1002 11:55:02.513343  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.2
	I1002 11:55:02.513399  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.2
	I1002 11:55:02.519549  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.2 (exists)
	I1002 11:55:02.529685  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.2 (exists)
	I1002 11:55:02.739233  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:03.573767  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:05.577137  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:07.577690  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:06.191660  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:06.191697  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:06.191711  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:06.268234  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:06.268270  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:06.769081  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:06.775235  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:06.775267  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:07.268848  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:07.289255  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:07.289294  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:07.769010  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:07.776315  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 200:
	ok
	I1002 11:55:07.785543  384965 api_server.go:141] control plane version: v1.28.2
	I1002 11:55:07.785578  384965 api_server.go:131] duration metric: took 6.448776132s to wait for apiserver health ...
	I1002 11:55:07.785620  384965 cni.go:84] Creating CNI manager for ""
	I1002 11:55:07.785630  384965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:55:07.963339  384965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:55:07.965036  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:55:08.003261  384965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:55:08.072023  384965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:55:08.084616  384965 system_pods.go:59] 8 kube-system pods found
	I1002 11:55:08.084657  384965 system_pods.go:61] "coredns-5dd5756b68-9wv56" [f04d6125-ea28-41cc-9251-7ccee27162bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:55:08.084670  384965 system_pods.go:61] "etcd-default-k8s-diff-port-777999" [5bc34f24-2922-4ce8-b11d-935f9b3c8b4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:55:08.084680  384965 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-777999" [b44f8ca6-f43e-4c99-af8d-23255f94257c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:55:08.084693  384965 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-777999" [5e20830f-d51b-4ca7-a7b6-e0f24f4a50e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 11:55:08.084709  384965 system_pods.go:61] "kube-proxy-gchnc" [061811c7-2ac8-448a-b441-838f9aaf9145] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 11:55:08.084723  384965 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-777999" [9d874541-7d2e-4c9b-8935-2bf7386e6a07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 11:55:08.084737  384965 system_pods.go:61] "metrics-server-57f55c9bc5-wk2c7" [f28e9db7-2182-40d8-85a7-fa40c2ff8077] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:55:08.084752  384965 system_pods.go:61] "storage-provisioner" [aff1275b-909d-4c70-9fb5-cb36170c591e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:55:08.084767  384965 system_pods.go:74] duration metric: took 12.715919ms to wait for pod list to return data ...
	I1002 11:55:08.084783  384965 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:55:08.089289  384965 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:55:08.089323  384965 node_conditions.go:123] node cpu capacity is 2
	I1002 11:55:08.089337  384965 node_conditions.go:105] duration metric: took 4.548285ms to run NodePressure ...
	I1002 11:55:08.089359  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:08.496528  384965 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:55:08.509299  384965 kubeadm.go:787] kubelet initialised
	I1002 11:55:08.509331  384965 kubeadm.go:788] duration metric: took 12.771905ms waiting for restarted kubelet to initialise ...
	I1002 11:55:08.509343  384965 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:08.516124  384965 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.528838  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.528938  384965 pod_ready.go:81] duration metric: took 12.780895ms waiting for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.528967  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.529001  384965 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.534830  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.534867  384965 pod_ready.go:81] duration metric: took 5.838075ms waiting for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.534882  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.534892  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.549854  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.549885  384965 pod_ready.go:81] duration metric: took 14.983531ms waiting for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.549900  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.549913  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.559230  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.559313  384965 pod_ready.go:81] duration metric: took 9.38728ms waiting for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.559335  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.559347  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.900163  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-proxy-gchnc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.900190  384965 pod_ready.go:81] duration metric: took 340.83496ms waiting for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.900199  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-proxy-gchnc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.900208  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:09.516054  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.516096  384965 pod_ready.go:81] duration metric: took 615.877294ms waiting for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:09.516112  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.516121  384965 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:09.701735  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.701764  384965 pod_ready.go:81] duration metric: took 185.632721ms waiting for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:09.701775  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.701782  384965 pod_ready.go:38] duration metric: took 1.192428133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:09.701800  384965 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:55:09.715441  384965 ops.go:34] apiserver oom_adj: -16
	I1002 11:55:09.715471  384965 kubeadm.go:640] restartCluster took 22.236518554s
	I1002 11:55:09.715483  384965 kubeadm.go:406] StartCluster complete in 22.288924118s
	I1002 11:55:09.715506  384965 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:55:09.715603  384965 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:55:09.717604  384965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:55:09.832925  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:55:09.832958  384965 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:55:09.833045  384965 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:55:09.833070  384965 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-777999"
	I1002 11:55:09.833078  384965 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-777999"
	I1002 11:55:09.833081  384965 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-777999"
	I1002 11:55:09.833097  384965 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-777999"
	W1002 11:55:09.833106  384965 addons.go:240] addon storage-provisioner should already be in state true
	I1002 11:55:09.833106  384965 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-777999"
	I1002 11:55:09.833108  384965 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-777999"
	W1002 11:55:09.833125  384965 addons.go:240] addon metrics-server should already be in state true
	I1002 11:55:09.833170  384965 host.go:66] Checking if "default-k8s-diff-port-777999" exists ...
	I1002 11:55:09.833170  384965 host.go:66] Checking if "default-k8s-diff-port-777999" exists ...
	I1002 11:55:09.833570  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.833592  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.833615  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.833624  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.833634  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.833646  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.839134  384965 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-777999" context rescaled to 1 replicas
	I1002 11:55:09.839204  384965 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:55:09.882782  384965 out.go:177] * Verifying Kubernetes components...
	I1002 11:55:09.852478  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I1002 11:55:09.853164  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46377
	I1002 11:55:09.853212  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33563
	I1002 11:55:09.884413  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:55:09.884847  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.884862  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.884978  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.885450  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.885473  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.885590  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.885616  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.885875  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.885905  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.885931  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.885991  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.886291  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.886499  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.886608  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.886609  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.886643  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.886650  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.890816  384965 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-777999"
	W1002 11:55:09.890840  384965 addons.go:240] addon default-storageclass should already be in state true
	I1002 11:55:09.890874  384965 host.go:66] Checking if "default-k8s-diff-port-777999" exists ...
	I1002 11:55:09.891346  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.891381  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.905399  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I1002 11:55:09.905472  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1002 11:55:09.905949  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.906013  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.906516  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.906548  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.906616  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.906638  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.907044  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.907050  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.907204  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.907296  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.907802  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I1002 11:55:09.908797  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.909184  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:55:09.911200  384965 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 11:55:09.909554  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.909557  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:55:09.913028  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.913040  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 11:55:09.913097  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 11:55:09.913128  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:55:09.914961  384965 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:10.102329  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.589219551s)
	I1002 11:55:10.102369  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1002 11:55:10.102405  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.2
	I1002 11:55:10.102437  384344 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.2: (7.58915139s)
	I1002 11:55:10.102467  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.2 (exists)
	I1002 11:55:10.102468  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.2
	I1002 11:55:10.102517  384344 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (7.363200276s)
	I1002 11:55:10.102554  384344 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1002 11:55:10.102587  384344 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:10.102639  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:10.107376  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:09.913417  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.916644  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.916734  384965 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:55:09.916751  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:55:09.916773  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:55:09.917177  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.917217  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.917938  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:55:09.917968  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.918238  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:55:09.918494  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:55:09.918725  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:55:09.919087  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:55:09.920001  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.920470  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:55:09.920499  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.920702  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:55:09.920898  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:55:09.921037  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:55:09.921164  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:55:09.936676  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41353
	I1002 11:55:09.937243  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.937814  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.937838  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.938269  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.938503  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.940662  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:55:09.940930  384965 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:55:09.940952  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:55:09.940975  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:55:09.944168  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.944929  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:55:09.944938  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:55:09.944972  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.945129  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:55:09.945323  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:55:09.945464  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:55:10.048027  384965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:55:10.064428  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 11:55:10.064457  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 11:55:10.113892  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 11:55:10.113922  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 11:55:10.162803  384965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:55:10.203352  384965 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-777999" to be "Ready" ...
	I1002 11:55:10.203377  384965 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1002 11:55:10.209916  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:55:10.209945  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 11:55:10.283168  384965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:55:11.838556  384965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.790470973s)
	I1002 11:55:11.838584  384965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.675739061s)
	I1002 11:55:11.838618  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.838620  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.838659  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.838635  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.838886  384965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.555664753s)
	I1002 11:55:11.838941  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.838954  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.838980  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.838992  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.839001  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.838961  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.839104  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.839139  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.839157  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.839170  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.839303  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.839369  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.839409  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.839421  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.839431  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.839688  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.839700  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.839710  384965 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-777999"
	I1002 11:55:11.841889  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.841915  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.842201  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.842253  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.842259  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.842269  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.849511  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.849529  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.849874  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.849878  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.849901  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.853656  384965 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1002 11:55:10.075236  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:12.576161  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:11.855303  384965 addons.go:502] enable addons completed in 2.022363817s: enabled=[metrics-server storage-provisioner default-storageclass]
	I1002 11:55:12.217572  384965 node_ready.go:58] node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:12.931492  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.2: (2.828987001s)
	I1002 11:55:12.931534  384344 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.824127868s)
	I1002 11:55:12.931594  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1002 11:55:12.931539  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.2 from cache
	I1002 11:55:12.931660  384344 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1002 11:55:12.931718  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1002 11:55:12.931728  384344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1002 11:55:12.939018  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1002 11:55:14.293770  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.362024408s)
	I1002 11:55:14.293812  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1002 11:55:14.293844  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1002 11:55:14.293919  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1002 11:55:15.843943  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.2: (1.549996136s)
	I1002 11:55:15.843970  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.2 from cache
	I1002 11:55:15.843995  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.2
	I1002 11:55:15.844044  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.2
	I1002 11:55:15.077109  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:17.575669  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:14.219000  384965 node_ready.go:58] node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:16.717611  384965 node_ready.go:49] node "default-k8s-diff-port-777999" has status "Ready":"True"
	I1002 11:55:16.717639  384965 node_ready.go:38] duration metric: took 6.514250616s waiting for node "default-k8s-diff-port-777999" to be "Ready" ...
	I1002 11:55:16.717652  384965 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:16.724331  384965 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.242058  384965 pod_ready.go:92] pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:17.242084  384965 pod_ready.go:81] duration metric: took 517.728305ms waiting for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.242093  384965 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.247916  384965 pod_ready.go:92] pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:17.247946  384965 pod_ready.go:81] duration metric: took 5.844733ms waiting for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.247960  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.596133  384505 kubeadm.go:787] kubelet initialised
	I1002 11:55:18.596163  384505 kubeadm.go:788] duration metric: took 48.489169583s waiting for restarted kubelet to initialise ...
	I1002 11:55:18.596173  384505 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:18.603606  384505 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9xdpq" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.612080  384505 pod_ready.go:92] pod "coredns-5644d7b6d9-9xdpq" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.612112  384505 pod_ready.go:81] duration metric: took 8.472159ms waiting for pod "coredns-5644d7b6d9-9xdpq" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.612124  384505 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-xrfq8" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.618116  384505 pod_ready.go:92] pod "coredns-5644d7b6d9-xrfq8" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.618147  384505 pod_ready.go:81] duration metric: took 6.014635ms waiting for pod "coredns-5644d7b6d9-xrfq8" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.618159  384505 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.624120  384505 pod_ready.go:92] pod "etcd-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.624148  384505 pod_ready.go:81] duration metric: took 5.979959ms waiting for pod "etcd-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.624162  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.631373  384505 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.631404  384505 pod_ready.go:81] duration metric: took 7.233318ms waiting for pod "kube-apiserver-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.631418  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.990560  384505 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.990593  384505 pod_ready.go:81] duration metric: took 359.165649ms waiting for pod "kube-controller-manager-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.990608  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gkhxb" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.708531  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.2: (1.864455947s)
	I1002 11:55:17.708567  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.2 from cache
	I1002 11:55:17.708616  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.2
	I1002 11:55:17.708669  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.2
	I1002 11:55:20.492385  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.2: (2.783683562s)
	I1002 11:55:20.492427  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.2 from cache
	I1002 11:55:20.492455  384344 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1002 11:55:20.492508  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1002 11:55:19.575875  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:22.075666  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:19.526494  384965 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:19.526525  384965 pod_ready.go:81] duration metric: took 2.278556042s waiting for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.526542  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:20.927586  384965 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:20.927626  384965 pod_ready.go:81] duration metric: took 1.401074339s waiting for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:20.927641  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.117907  384965 pod_ready.go:92] pod "kube-proxy-gchnc" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:21.117943  384965 pod_ready.go:81] duration metric: took 190.292051ms waiting for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.117957  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.517768  384965 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:21.517788  384965 pod_ready.go:81] duration metric: took 399.822591ms waiting for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.517800  384965 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:23.829704  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:19.390560  384505 pod_ready.go:92] pod "kube-proxy-gkhxb" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:19.390588  384505 pod_ready.go:81] duration metric: took 399.970888ms waiting for pod "kube-proxy-gkhxb" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.390602  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.791405  384505 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:19.791443  384505 pod_ready.go:81] duration metric: took 400.826662ms waiting for pod "kube-scheduler-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.791458  384505 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:22.098383  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:24.098434  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:21.439323  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1002 11:55:21.439378  384344 cache_images.go:123] Successfully loaded all cached images
	I1002 11:55:21.439386  384344 cache_images.go:92] LoadImages completed in 19.585504619s
	I1002 11:55:21.439504  384344 ssh_runner.go:195] Run: crio config
	I1002 11:55:21.510657  384344 cni.go:84] Creating CNI manager for ""
	I1002 11:55:21.510683  384344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:55:21.510703  384344 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:55:21.510734  384344 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.143 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-304121 NodeName:no-preload-304121 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:55:21.511445  384344 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-304121"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.143
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.143"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:55:21.511576  384344 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-304121 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:no-preload-304121 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:55:21.511643  384344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:55:21.522719  384344 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:55:21.522788  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:55:21.531557  384344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1002 11:55:21.548551  384344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:55:21.565791  384344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1002 11:55:21.583240  384344 ssh_runner.go:195] Run: grep 192.168.39.143	control-plane.minikube.internal$ /etc/hosts
	I1002 11:55:21.587268  384344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:55:21.600487  384344 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121 for IP: 192.168.39.143
	I1002 11:55:21.600520  384344 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:55:21.600663  384344 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:55:21.600697  384344 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:55:21.600794  384344 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/client.key
	I1002 11:55:21.600873  384344 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/apiserver.key.62e94479
	I1002 11:55:21.600926  384344 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/proxy-client.key
	I1002 11:55:21.601033  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:55:21.601061  384344 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:55:21.601071  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:55:21.601093  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:55:21.601118  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:55:21.601146  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:55:21.601182  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:55:21.601818  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:55:21.626860  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:55:21.650402  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:55:21.678876  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 11:55:21.704351  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:55:21.729385  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:55:21.755185  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:55:21.779149  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:55:21.802775  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:55:21.825691  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:55:21.849575  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:55:21.872777  384344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:55:21.890629  384344 ssh_runner.go:195] Run: openssl version
	I1002 11:55:21.896382  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:55:21.906415  384344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:55:21.911134  384344 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:55:21.911202  384344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:55:21.916782  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:55:21.926770  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:55:21.936394  384344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:55:21.940874  384344 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:55:21.940944  384344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:55:21.946542  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:55:21.956590  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:55:21.966128  384344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:55:21.971092  384344 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:55:21.971144  384344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:55:21.976625  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:55:21.987142  384344 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:55:21.991548  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:55:21.998311  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:55:22.004302  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:55:22.010267  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:55:22.016280  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:55:22.022273  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:55:22.027921  384344 kubeadm.go:404] StartCluster: {Name:no-preload-304121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-304121 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:55:22.028050  384344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:55:22.028141  384344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:55:22.068066  384344 cri.go:89] found id: ""
	I1002 11:55:22.068147  384344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:55:22.079381  384344 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:55:22.079406  384344 kubeadm.go:636] restartCluster start
	I1002 11:55:22.079471  384344 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:55:22.088977  384344 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:22.090087  384344 kubeconfig.go:92] found "no-preload-304121" server: "https://192.168.39.143:8443"
	I1002 11:55:22.093401  384344 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:55:22.103315  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:22.103378  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:22.114520  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:22.114538  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:22.114586  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:22.126040  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:22.626326  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:22.626438  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:22.637215  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:23.126863  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:23.126967  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:23.138035  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:23.626453  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:23.626539  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:23.639113  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:24.126445  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:24.126541  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:24.139561  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:24.626423  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:24.626534  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:24.638442  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:25.127011  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:25.127103  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:25.139945  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:25.626451  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:25.626539  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:25.638919  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:26.126459  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:26.126551  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:26.140068  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:24.574146  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.574656  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.329321  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:28.329400  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.098690  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:28.098837  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.626344  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:26.626445  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:26.641274  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:27.126886  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:27.126965  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:27.139451  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:27.627110  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:27.627264  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:27.640675  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:28.126212  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:28.126301  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:28.140048  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:28.626433  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:28.626530  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:28.639683  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:29.127030  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:29.127142  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:29.139681  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:29.626803  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:29.626878  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:29.639468  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:30.127126  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:30.127231  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:30.140930  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:30.626441  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:30.626535  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:30.639070  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:31.126421  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:31.126503  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:31.138724  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:28.575201  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:31.074607  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:30.830079  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:32.832350  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:30.099074  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:32.596870  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:31.627189  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:31.627281  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:31.640362  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:32.104121  384344 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:55:32.104153  384344 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:55:32.104169  384344 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:55:32.104223  384344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:55:32.147672  384344 cri.go:89] found id: ""
	I1002 11:55:32.147756  384344 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:55:32.164049  384344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:55:32.174941  384344 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:55:32.175041  384344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:55:32.185756  384344 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:55:32.185783  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:32.328093  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.120678  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.341378  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.433591  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.518381  384344 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:55:33.518458  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:33.530334  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:34.043021  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:34.542602  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:35.042825  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:35.542484  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:36.042547  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:36.067551  384344 api_server.go:72] duration metric: took 2.549193903s to wait for apiserver process to appear ...
	I1002 11:55:36.067574  384344 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:55:36.067593  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:33.076598  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:35.077561  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:37.575927  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:35.328950  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:37.330925  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:34.598649  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:36.598851  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:39.099902  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:40.195285  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:40.195318  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:40.195330  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:40.261287  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:40.261324  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:40.762016  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:40.776249  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:40.776279  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:41.262027  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:41.277940  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:41.277971  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:41.762404  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:41.767751  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 200:
	ok
	I1002 11:55:41.775963  384344 api_server.go:141] control plane version: v1.28.2
	I1002 11:55:41.775988  384344 api_server.go:131] duration metric: took 5.708406738s to wait for apiserver health ...
	I1002 11:55:41.775997  384344 cni.go:84] Creating CNI manager for ""
	I1002 11:55:41.776003  384344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:55:41.777791  384344 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:55:40.076215  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:42.574607  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:39.831982  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:42.330541  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:41.599812  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:44.097139  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:41.779495  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:55:41.796340  384344 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:55:41.838383  384344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:55:41.863561  384344 system_pods.go:59] 8 kube-system pods found
	I1002 11:55:41.863600  384344 system_pods.go:61] "coredns-5dd5756b68-hn8bw" [f388b655-7f90-436d-a1fd-458f22c7f5e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:55:41.863612  384344 system_pods.go:61] "etcd-no-preload-304121" [b45507da-d57a-45f5-82a3-37b273c42747] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:55:41.863621  384344 system_pods.go:61] "kube-apiserver-no-preload-304121" [7f8cdde0-5050-4cea-87c5-56bd0a5d623b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:55:41.863630  384344 system_pods.go:61] "kube-controller-manager-no-preload-304121" [24d40a92-d549-48c8-bf5f-983fdc15dcae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 11:55:41.863641  384344 system_pods.go:61] "kube-proxy-cwvr7" [9e3f08e6-92ad-4ebc-afe3-44d5ab81a63a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 11:55:41.863651  384344 system_pods.go:61] "kube-scheduler-no-preload-304121" [cc3c6828-f829-416a-9cfd-ddcc0f485578] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 11:55:41.863665  384344 system_pods.go:61] "metrics-server-57f55c9bc5-lrqt9" [7b70c72d-06b3-40ae-8e0c-ea4794cfe47b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:55:41.863682  384344 system_pods.go:61] "storage-provisioner" [457608a4-5ba9-45d2-841e-889930ce6bd7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:55:41.863694  384344 system_pods.go:74] duration metric: took 25.279676ms to wait for pod list to return data ...
	I1002 11:55:41.863707  384344 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:55:41.870534  384344 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:55:41.870580  384344 node_conditions.go:123] node cpu capacity is 2
	I1002 11:55:41.870636  384344 node_conditions.go:105] duration metric: took 6.921999ms to run NodePressure ...
	I1002 11:55:41.870666  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:42.164858  384344 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:55:42.169831  384344 kubeadm.go:787] kubelet initialised
	I1002 11:55:42.169855  384344 kubeadm.go:788] duration metric: took 4.969744ms waiting for restarted kubelet to initialise ...
	I1002 11:55:42.169864  384344 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:42.176338  384344 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:44.195428  384344 pod_ready.go:102] pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:46.195763  384344 pod_ready.go:92] pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:46.195786  384344 pod_ready.go:81] duration metric: took 4.019424872s waiting for pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:46.195795  384344 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:44.581249  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:47.074875  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:44.331120  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:46.833248  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:46.099661  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:48.599051  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:48.217529  384344 pod_ready.go:102] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:50.218641  384344 pod_ready.go:102] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:49.575639  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:52.074550  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:49.329627  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:51.330613  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:53.330666  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:51.098233  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:53.098464  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:52.717990  384344 pod_ready.go:102] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:53.716716  384344 pod_ready.go:92] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:53.716751  384344 pod_ready.go:81] duration metric: took 7.520948071s waiting for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:53.716769  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.738808  384344 pod_ready.go:92] pod "kube-apiserver-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.738832  384344 pod_ready.go:81] duration metric: took 1.022054915s waiting for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.738841  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.743979  384344 pod_ready.go:92] pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.743997  384344 pod_ready.go:81] duration metric: took 5.14952ms waiting for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.744006  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cwvr7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.749813  384344 pod_ready.go:92] pod "kube-proxy-cwvr7" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.749843  384344 pod_ready.go:81] duration metric: took 5.828956ms waiting for pod "kube-proxy-cwvr7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.749855  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.913811  384344 pod_ready.go:92] pod "kube-scheduler-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.913840  384344 pod_ready.go:81] duration metric: took 163.97545ms waiting for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.913853  384344 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.075263  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:56.574518  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:55.829643  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:58.328816  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:55.597512  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:57.598176  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:57.221008  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:59.221092  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:01.221270  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:59.075344  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:01.576898  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:00.330184  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:02.332041  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:59.599606  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:02.098251  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:04.098441  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:03.222251  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:05.721050  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:03.577043  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:06.075021  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:04.829434  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:06.830586  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:08.830689  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:06.100229  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:08.597399  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:07.725911  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:10.222275  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:08.574907  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:11.075011  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:10.831040  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:13.330226  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:10.599336  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:12.601338  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:12.721538  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:14.732864  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:13.075225  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:15.575267  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:15.831410  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:18.328821  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:15.098085  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:17.598406  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:17.220843  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:19.221812  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:18.074885  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:20.575220  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:20.830090  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:23.329239  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:20.108397  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:22.597329  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:21.723316  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:24.220817  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:26.222858  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:23.075276  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:25.574332  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:27.574872  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:25.330095  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:27.831991  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:24.598737  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:27.098098  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:28.721424  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:30.721466  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:30.074535  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:32.075748  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:30.330155  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:32.830009  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:29.597397  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:31.598389  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:33.598490  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:33.223521  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:35.719548  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:34.575020  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.074654  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:35.331567  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.832286  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:35.598829  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.599403  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.722451  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:40.223547  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:39.075433  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:41.575885  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:40.329838  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:42.330038  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:40.099862  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:42.598269  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:42.723887  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:45.221944  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:44.075128  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:46.075540  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:44.331960  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:46.829987  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:45.097469  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:47.098616  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:47.222108  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:49.721938  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:48.589935  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:51.074993  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:49.331749  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:51.830280  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:53.830731  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:49.598433  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:52.097486  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:54.098228  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:52.222646  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:54.726547  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:53.076322  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:55.575236  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:56.329005  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:58.330077  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:56.598418  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:59.098019  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:57.221753  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:59.721824  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:58.074481  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:00.576860  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:00.831342  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:03.328695  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:01.598124  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:04.098241  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:02.221634  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:04.222422  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:03.075152  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:05.076964  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:07.577621  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:05.328811  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:07.329223  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:06.598041  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:09.097384  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:06.724181  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:09.221108  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:11.223407  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:10.077910  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:12.574292  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:09.331559  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:11.828655  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:13.829065  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:11.098632  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:13.099363  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:13.721785  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:16.222201  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:14.574467  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:16.576124  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:15.829618  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:17.830298  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:15.598739  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:18.097854  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:18.722947  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:21.220868  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:19.074608  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:21.079563  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:20.329680  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:22.335299  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:20.109847  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:22.598994  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:23.221458  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:25.222249  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:23.575662  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:26.075111  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:24.829500  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:26.830678  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:25.099426  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:27.598577  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:27.721159  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:29.725949  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:28.574416  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:30.576031  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:29.330079  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:31.330829  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:33.829243  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:30.098615  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:32.598161  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:32.220933  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:34.720190  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:33.075330  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:35.075824  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:37.574487  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:35.829585  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:38.333997  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:34.598838  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:37.098682  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:36.723779  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:39.222751  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:40.074293  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:42.574665  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:40.829324  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:43.329265  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:39.598047  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:41.598338  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:44.097421  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:41.720538  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:43.721398  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:46.220972  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:45.074832  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:47.573962  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:45.330175  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:47.829115  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:46.097496  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:48.098108  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:48.221977  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:50.222810  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:49.576755  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.076442  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:49.829764  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.330051  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:50.099771  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.599534  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.223223  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:54.721544  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:54.574341  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:56.574466  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:54.829215  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:56.829468  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:58.829730  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:55.097141  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:57.598230  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:57.221854  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:59.721190  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:58.574928  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:00.575201  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:00.830156  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:03.329206  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:59.599838  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:02.097630  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:04.099434  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:01.724512  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:04.223282  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:03.076896  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:05.576101  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:05.330313  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:07.830038  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:06.597389  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:09.098677  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:06.721370  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:08.723225  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:11.224608  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:08.076078  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:10.574982  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:12.575115  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:09.832412  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:12.330220  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:11.597760  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:13.598933  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:13.726487  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.220404  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:14.575310  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.576156  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:14.330536  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.829762  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:18.833076  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.099600  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:18.599713  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:18.222118  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:20.722548  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:19.076690  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:21.575073  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:21.330604  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.829742  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:21.099777  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.598614  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.220183  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:25.221895  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.575355  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:25.575510  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:25.830538  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:28.329783  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:26.097290  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:28.097568  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:27.722661  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.221305  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:28.074457  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.074944  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:32.075905  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.831228  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:33.328903  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.098502  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:32.599120  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:32.221445  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:34.224133  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:34.075953  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:36.574997  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:35.330632  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:37.830117  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:35.101830  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:37.597886  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:36.722453  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:38.722619  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:40.725507  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:39.077321  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:41.574812  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:40.329004  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:42.329704  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:39.598243  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:41.600336  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:44.098496  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:43.225247  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:45.721116  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:43.574928  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:46.073774  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:44.830119  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:47.330229  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:46.101053  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:48.597255  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:47.724301  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:50.220275  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:48.074634  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:50.075498  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:52.576147  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:49.829149  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:52.328994  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:50.598113  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:53.096876  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:52.224282  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:54.721074  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:54.576355  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:57.074445  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:54.330474  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:56.331220  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:58.829693  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:55.098655  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:57.598659  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:56.721698  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:58.721958  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:01.222685  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:59.074760  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:01.076178  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:01.551409  384787 pod_ready.go:81] duration metric: took 4m0.000833874s waiting for pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:01.551453  384787 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:01.551481  384787 pod_ready.go:38] duration metric: took 4m12.797362192s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:01.551549  384787 kubeadm.go:640] restartCluster took 4m35.116019688s
	W1002 11:59:01.551687  384787 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1002 11:59:01.551757  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 11:59:00.830381  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:02.830963  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:00.103080  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:02.600662  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:03.720777  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:05.722315  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:05.330034  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:07.835944  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:05.098121  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:07.098246  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:09.099171  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:07.725245  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:10.221073  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:10.328885  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:12.331198  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:11.599122  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:14.099609  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:15.268063  384787 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.716271748s)
	I1002 11:59:15.268160  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:15.282632  384787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:59:15.294231  384787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:59:15.305847  384787 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:59:15.305892  384787 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 11:59:15.365627  384787 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 11:59:15.365703  384787 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 11:59:15.546049  384787 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 11:59:15.546175  384787 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 11:59:15.546300  384787 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 11:59:15.810889  384787 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 11:59:12.221147  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:14.222293  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:16.223901  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:15.813908  384787 out.go:204]   - Generating certificates and keys ...
	I1002 11:59:15.814079  384787 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 11:59:15.814178  384787 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 11:59:15.814257  384787 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 11:59:15.814309  384787 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1002 11:59:15.814451  384787 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 11:59:15.814528  384787 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1002 11:59:15.814874  384787 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1002 11:59:15.815489  384787 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1002 11:59:15.816067  384787 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 11:59:15.816586  384787 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 11:59:15.817099  384787 kubeadm.go:322] [certs] Using the existing "sa" key
	I1002 11:59:15.817161  384787 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 11:59:15.988485  384787 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 11:59:16.038665  384787 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 11:59:16.218038  384787 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 11:59:16.415133  384787 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 11:59:16.415531  384787 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 11:59:16.418000  384787 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 11:59:16.420952  384787 out.go:204]   - Booting up control plane ...
	I1002 11:59:16.421147  384787 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 11:59:16.421273  384787 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 11:59:16.423255  384787 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 11:59:16.442699  384787 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:59:16.443964  384787 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:59:16.444055  384787 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 11:59:16.602169  384787 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 11:59:14.331978  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:16.830188  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:18.831449  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:16.597731  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:18.598683  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:18.722865  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:21.222671  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:21.329396  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:21.518315  384965 pod_ready.go:81] duration metric: took 4m0.000482629s waiting for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:21.518363  384965 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:21.518378  384965 pod_ready.go:38] duration metric: took 4m4.800712941s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:21.518406  384965 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:59:21.518451  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 11:59:21.518519  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 11:59:21.587182  384965 cri.go:89] found id: "3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:21.587210  384965 cri.go:89] found id: ""
	I1002 11:59:21.587221  384965 logs.go:284] 1 containers: [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735]
	I1002 11:59:21.587285  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.592996  384965 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 11:59:21.593072  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 11:59:21.635267  384965 cri.go:89] found id: "8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:21.635293  384965 cri.go:89] found id: ""
	I1002 11:59:21.635306  384965 logs.go:284] 1 containers: [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d]
	I1002 11:59:21.635367  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.640347  384965 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 11:59:21.640428  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 11:59:21.686113  384965 cri.go:89] found id: "f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:21.686146  384965 cri.go:89] found id: ""
	I1002 11:59:21.686157  384965 logs.go:284] 1 containers: [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d]
	I1002 11:59:21.686224  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.691867  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 11:59:21.691959  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 11:59:21.745210  384965 cri.go:89] found id: "7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:21.745245  384965 cri.go:89] found id: ""
	I1002 11:59:21.745257  384965 logs.go:284] 1 containers: [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e]
	I1002 11:59:21.745330  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.750774  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 11:59:21.750862  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 11:59:21.810054  384965 cri.go:89] found id: "d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:21.810084  384965 cri.go:89] found id: ""
	I1002 11:59:21.810099  384965 logs.go:284] 1 containers: [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6]
	I1002 11:59:21.810161  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.815433  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 11:59:21.815518  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 11:59:21.858759  384965 cri.go:89] found id: "beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:21.858794  384965 cri.go:89] found id: ""
	I1002 11:59:21.858807  384965 logs.go:284] 1 containers: [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f]
	I1002 11:59:21.858887  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.864818  384965 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 11:59:21.864900  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 11:59:21.920312  384965 cri.go:89] found id: ""
	I1002 11:59:21.920343  384965 logs.go:284] 0 containers: []
	W1002 11:59:21.920353  384965 logs.go:286] No container was found matching "kindnet"
	I1002 11:59:21.920362  384965 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 11:59:21.920429  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 11:59:21.964677  384965 cri.go:89] found id: "2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:21.964708  384965 cri.go:89] found id: "b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:21.964715  384965 cri.go:89] found id: ""
	I1002 11:59:21.964724  384965 logs.go:284] 2 containers: [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358]
	I1002 11:59:21.964812  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.970514  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.976118  384965 logs.go:123] Gathering logs for kube-apiserver [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735] ...
	I1002 11:59:21.976158  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:22.026289  384965 logs.go:123] Gathering logs for kube-controller-manager [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f] ...
	I1002 11:59:22.026337  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:22.094330  384965 logs.go:123] Gathering logs for storage-provisioner [b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358] ...
	I1002 11:59:22.094389  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:22.133879  384965 logs.go:123] Gathering logs for container status ...
	I1002 11:59:22.133911  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 11:59:22.186645  384965 logs.go:123] Gathering logs for dmesg ...
	I1002 11:59:22.186688  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 11:59:22.200091  384965 logs.go:123] Gathering logs for kube-proxy [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6] ...
	I1002 11:59:22.200132  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:22.245383  384965 logs.go:123] Gathering logs for etcd [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d] ...
	I1002 11:59:22.245420  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:22.312167  384965 logs.go:123] Gathering logs for kube-scheduler [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e] ...
	I1002 11:59:22.312212  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:22.358596  384965 logs.go:123] Gathering logs for kubelet ...
	I1002 11:59:22.358631  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 11:59:22.417643  384965 logs.go:123] Gathering logs for coredns [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d] ...
	I1002 11:59:22.417695  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:22.467793  384965 logs.go:123] Gathering logs for storage-provisioner [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175] ...
	I1002 11:59:22.467830  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:22.509173  384965 logs.go:123] Gathering logs for CRI-O ...
	I1002 11:59:22.509216  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 11:59:23.037502  384965 logs.go:123] Gathering logs for describe nodes ...
	I1002 11:59:23.037554  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 11:59:19.792274  384505 pod_ready.go:81] duration metric: took 4m0.000796599s waiting for pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:19.792309  384505 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:19.792337  384505 pod_ready.go:38] duration metric: took 4m1.196150969s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:19.792389  384505 kubeadm.go:640] restartCluster took 5m11.202020009s
	W1002 11:59:19.792478  384505 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1002 11:59:19.792509  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 11:59:24.926525  384505 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.133982838s)
	I1002 11:59:24.926616  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:24.943054  384505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:59:24.953201  384505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:59:24.963105  384505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:59:24.963158  384505 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1002 11:59:25.027860  384505 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1002 11:59:25.027986  384505 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 11:59:25.214224  384505 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 11:59:25.214399  384505 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 11:59:25.214529  384505 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 11:59:25.472019  384505 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:59:25.472706  384505 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:59:25.481965  384505 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1002 11:59:25.630265  384505 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 11:59:25.105120  384787 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502545 seconds
	I1002 11:59:25.105321  384787 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 11:59:25.124191  384787 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 11:59:25.659886  384787 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 11:59:25.660110  384787 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-487027 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 11:59:26.180742  384787 kubeadm.go:322] [bootstrap-token] Using token: tg9u90.7q86afgrs7pieyop
	I1002 11:59:23.723485  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:25.724673  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:26.182574  384787 out.go:204]   - Configuring RBAC rules ...
	I1002 11:59:26.182738  384787 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 11:59:26.190559  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 11:59:26.200659  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 11:59:26.212391  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 11:59:26.217946  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 11:59:26.226534  384787 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 11:59:26.248000  384787 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 11:59:26.545226  384787 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 11:59:26.604475  384787 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 11:59:26.605636  384787 kubeadm.go:322] 
	I1002 11:59:26.605726  384787 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 11:59:26.605738  384787 kubeadm.go:322] 
	I1002 11:59:26.605810  384787 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 11:59:26.605815  384787 kubeadm.go:322] 
	I1002 11:59:26.605844  384787 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 11:59:26.605914  384787 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 11:59:26.605973  384787 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 11:59:26.605981  384787 kubeadm.go:322] 
	I1002 11:59:26.606052  384787 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 11:59:26.606058  384787 kubeadm.go:322] 
	I1002 11:59:26.606097  384787 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 11:59:26.606101  384787 kubeadm.go:322] 
	I1002 11:59:26.606143  384787 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 11:59:26.606203  384787 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 11:59:26.606263  384787 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 11:59:26.606267  384787 kubeadm.go:322] 
	I1002 11:59:26.606334  384787 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 11:59:26.606438  384787 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 11:59:26.606446  384787 kubeadm.go:322] 
	I1002 11:59:26.606580  384787 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tg9u90.7q86afgrs7pieyop \
	I1002 11:59:26.606732  384787 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 11:59:26.606764  384787 kubeadm.go:322] 	--control-plane 
	I1002 11:59:26.606773  384787 kubeadm.go:322] 
	I1002 11:59:26.606906  384787 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 11:59:26.606919  384787 kubeadm.go:322] 
	I1002 11:59:26.607066  384787 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tg9u90.7q86afgrs7pieyop \
	I1002 11:59:26.607192  384787 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 11:59:26.608470  384787 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 11:59:26.608503  384787 cni.go:84] Creating CNI manager for ""
	I1002 11:59:26.608547  384787 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:59:26.610426  384787 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:59:25.632074  384505 out.go:204]   - Generating certificates and keys ...
	I1002 11:59:25.632197  384505 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 11:59:25.632294  384505 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 11:59:25.632398  384505 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 11:59:25.632546  384505 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1002 11:59:25.632693  384505 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 11:59:25.633319  384505 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1002 11:59:25.633417  384505 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1002 11:59:25.633720  384505 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1002 11:59:25.634302  384505 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 11:59:25.635341  384505 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 11:59:25.635391  384505 kubeadm.go:322] [certs] Using the existing "sa" key
	I1002 11:59:25.635461  384505 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 11:59:25.743684  384505 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 11:59:25.940709  384505 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 11:59:26.418951  384505 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 11:59:26.676172  384505 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 11:59:26.677698  384505 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 11:59:26.612002  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:59:26.646809  384787 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:59:26.709486  384787 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:59:26.709648  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:26.709720  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=embed-certs-487027 minikube.k8s.io/updated_at=2023_10_02T11_59_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:26.778472  384787 ops.go:34] apiserver oom_adj: -16
	I1002 11:59:27.199359  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:27.351099  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:25.716079  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:59:25.739754  384965 api_server.go:72] duration metric: took 4m15.900505961s to wait for apiserver process to appear ...
	I1002 11:59:25.739785  384965 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:59:25.739834  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 11:59:25.739904  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 11:59:25.788719  384965 cri.go:89] found id: "3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:25.788747  384965 cri.go:89] found id: ""
	I1002 11:59:25.788758  384965 logs.go:284] 1 containers: [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735]
	I1002 11:59:25.788824  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.794426  384965 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 11:59:25.794500  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 11:59:25.836689  384965 cri.go:89] found id: "8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:25.836721  384965 cri.go:89] found id: ""
	I1002 11:59:25.836731  384965 logs.go:284] 1 containers: [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d]
	I1002 11:59:25.836808  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.841671  384965 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 11:59:25.841744  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 11:59:25.883947  384965 cri.go:89] found id: "f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:25.883976  384965 cri.go:89] found id: ""
	I1002 11:59:25.883986  384965 logs.go:284] 1 containers: [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d]
	I1002 11:59:25.884049  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.892631  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 11:59:25.892758  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 11:59:25.966469  384965 cri.go:89] found id: "7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:25.966502  384965 cri.go:89] found id: ""
	I1002 11:59:25.966514  384965 logs.go:284] 1 containers: [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e]
	I1002 11:59:25.966575  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.971814  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 11:59:25.971890  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 11:59:26.020970  384965 cri.go:89] found id: "d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:26.021002  384965 cri.go:89] found id: ""
	I1002 11:59:26.021013  384965 logs.go:284] 1 containers: [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6]
	I1002 11:59:26.021076  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.025582  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 11:59:26.025657  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 11:59:26.077339  384965 cri.go:89] found id: "beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:26.077371  384965 cri.go:89] found id: ""
	I1002 11:59:26.077383  384965 logs.go:284] 1 containers: [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f]
	I1002 11:59:26.077448  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.082311  384965 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 11:59:26.082396  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 11:59:26.126803  384965 cri.go:89] found id: ""
	I1002 11:59:26.126833  384965 logs.go:284] 0 containers: []
	W1002 11:59:26.126843  384965 logs.go:286] No container was found matching "kindnet"
	I1002 11:59:26.126851  384965 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 11:59:26.126992  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 11:59:26.176829  384965 cri.go:89] found id: "2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:26.176858  384965 cri.go:89] found id: "b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:26.176866  384965 cri.go:89] found id: ""
	I1002 11:59:26.176876  384965 logs.go:284] 2 containers: [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358]
	I1002 11:59:26.176945  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.182892  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.189288  384965 logs.go:123] Gathering logs for kube-controller-manager [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f] ...
	I1002 11:59:26.189316  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:26.257856  384965 logs.go:123] Gathering logs for storage-provisioner [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175] ...
	I1002 11:59:26.257910  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:26.297691  384965 logs.go:123] Gathering logs for container status ...
	I1002 11:59:26.297747  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 11:59:26.351211  384965 logs.go:123] Gathering logs for kubelet ...
	I1002 11:59:26.351254  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 11:59:26.425373  384965 logs.go:123] Gathering logs for describe nodes ...
	I1002 11:59:26.425416  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 11:59:26.568944  384965 logs.go:123] Gathering logs for kube-proxy [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6] ...
	I1002 11:59:26.568985  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:26.627406  384965 logs.go:123] Gathering logs for dmesg ...
	I1002 11:59:26.627449  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 11:59:26.641249  384965 logs.go:123] Gathering logs for coredns [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d] ...
	I1002 11:59:26.641281  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:26.696939  384965 logs.go:123] Gathering logs for storage-provisioner [b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358] ...
	I1002 11:59:26.696974  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:26.744365  384965 logs.go:123] Gathering logs for CRI-O ...
	I1002 11:59:26.744406  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 11:59:27.279579  384965 logs.go:123] Gathering logs for kube-apiserver [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735] ...
	I1002 11:59:27.279639  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:27.366447  384965 logs.go:123] Gathering logs for etcd [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d] ...
	I1002 11:59:27.366508  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:27.436429  384965 logs.go:123] Gathering logs for kube-scheduler [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e] ...
	I1002 11:59:27.436476  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:26.679464  384505 out.go:204]   - Booting up control plane ...
	I1002 11:59:26.679594  384505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 11:59:26.688060  384505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 11:59:26.700892  384505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 11:59:26.702245  384505 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 11:59:26.706277  384505 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 11:59:28.222692  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:30.223561  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:27.973079  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:28.472938  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:28.973900  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:29.473650  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:29.972984  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:30.473216  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:30.973931  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:31.474026  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:31.973024  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:32.473723  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:29.989828  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:59:29.995664  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 200:
	ok
	I1002 11:59:29.998819  384965 api_server.go:141] control plane version: v1.28.2
	I1002 11:59:29.998846  384965 api_server.go:131] duration metric: took 4.25905343s to wait for apiserver health ...
	I1002 11:59:29.998855  384965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:59:29.998882  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 11:59:29.998944  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 11:59:30.037898  384965 cri.go:89] found id: "3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:30.037925  384965 cri.go:89] found id: ""
	I1002 11:59:30.037935  384965 logs.go:284] 1 containers: [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735]
	I1002 11:59:30.038014  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.042751  384965 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 11:59:30.042835  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 11:59:30.085339  384965 cri.go:89] found id: "8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:30.085378  384965 cri.go:89] found id: ""
	I1002 11:59:30.085390  384965 logs.go:284] 1 containers: [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d]
	I1002 11:59:30.085463  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.090184  384965 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 11:59:30.090265  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 11:59:30.130574  384965 cri.go:89] found id: "f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:30.130602  384965 cri.go:89] found id: ""
	I1002 11:59:30.130611  384965 logs.go:284] 1 containers: [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d]
	I1002 11:59:30.130665  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.135040  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 11:59:30.135125  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 11:59:30.178044  384965 cri.go:89] found id: "7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:30.178067  384965 cri.go:89] found id: ""
	I1002 11:59:30.178078  384965 logs.go:284] 1 containers: [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e]
	I1002 11:59:30.178144  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.182586  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 11:59:30.182662  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 11:59:30.226121  384965 cri.go:89] found id: "d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:30.226142  384965 cri.go:89] found id: ""
	I1002 11:59:30.226152  384965 logs.go:284] 1 containers: [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6]
	I1002 11:59:30.226209  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.231080  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 11:59:30.231156  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 11:59:30.275499  384965 cri.go:89] found id: "beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:30.275533  384965 cri.go:89] found id: ""
	I1002 11:59:30.275545  384965 logs.go:284] 1 containers: [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f]
	I1002 11:59:30.275611  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.281023  384965 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 11:59:30.281089  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 11:59:30.325580  384965 cri.go:89] found id: ""
	I1002 11:59:30.325610  384965 logs.go:284] 0 containers: []
	W1002 11:59:30.325622  384965 logs.go:286] No container was found matching "kindnet"
	I1002 11:59:30.325630  384965 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 11:59:30.325691  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 11:59:30.372727  384965 cri.go:89] found id: "2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:30.372760  384965 cri.go:89] found id: "b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:30.372766  384965 cri.go:89] found id: ""
	I1002 11:59:30.372776  384965 logs.go:284] 2 containers: [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358]
	I1002 11:59:30.372838  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.377541  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.382371  384965 logs.go:123] Gathering logs for kubelet ...
	I1002 11:59:30.382403  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 11:59:30.449081  384965 logs.go:123] Gathering logs for etcd [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d] ...
	I1002 11:59:30.449132  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:30.519339  384965 logs.go:123] Gathering logs for coredns [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d] ...
	I1002 11:59:30.519392  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:30.566205  384965 logs.go:123] Gathering logs for storage-provisioner [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175] ...
	I1002 11:59:30.566250  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:30.607933  384965 logs.go:123] Gathering logs for container status ...
	I1002 11:59:30.607973  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 11:59:30.655904  384965 logs.go:123] Gathering logs for kube-apiserver [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735] ...
	I1002 11:59:30.655946  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:30.717563  384965 logs.go:123] Gathering logs for kube-controller-manager [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f] ...
	I1002 11:59:30.717619  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:30.779216  384965 logs.go:123] Gathering logs for storage-provisioner [b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358] ...
	I1002 11:59:30.779268  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:30.822075  384965 logs.go:123] Gathering logs for CRI-O ...
	I1002 11:59:30.822114  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 11:59:31.180609  384965 logs.go:123] Gathering logs for dmesg ...
	I1002 11:59:31.180664  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 11:59:31.196239  384965 logs.go:123] Gathering logs for describe nodes ...
	I1002 11:59:31.196274  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 11:59:31.345274  384965 logs.go:123] Gathering logs for kube-scheduler [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e] ...
	I1002 11:59:31.345318  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:31.392175  384965 logs.go:123] Gathering logs for kube-proxy [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6] ...
	I1002 11:59:31.392212  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:33.946599  384965 system_pods.go:59] 8 kube-system pods found
	I1002 11:59:33.946635  384965 system_pods.go:61] "coredns-5dd5756b68-9wv56" [f04d6125-ea28-41cc-9251-7ccee27162bc] Running
	I1002 11:59:33.946643  384965 system_pods.go:61] "etcd-default-k8s-diff-port-777999" [5bc34f24-2922-4ce8-b11d-935f9b3c8b4c] Running
	I1002 11:59:33.946650  384965 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-777999" [b44f8ca6-f43e-4c99-af8d-23255f94257c] Running
	I1002 11:59:33.946656  384965 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-777999" [5e20830f-d51b-4ca7-a7b6-e0f24f4a50e1] Running
	I1002 11:59:33.946659  384965 system_pods.go:61] "kube-proxy-gchnc" [061811c7-2ac8-448a-b441-838f9aaf9145] Running
	I1002 11:59:33.946664  384965 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-777999" [9d874541-7d2e-4c9b-8935-2bf7386e6a07] Running
	I1002 11:59:33.946677  384965 system_pods.go:61] "metrics-server-57f55c9bc5-wk2c7" [f28e9db7-2182-40d8-85a7-fa40c2ff8077] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:33.946687  384965 system_pods.go:61] "storage-provisioner" [aff1275b-909d-4c70-9fb5-cb36170c591e] Running
	I1002 11:59:33.946704  384965 system_pods.go:74] duration metric: took 3.947840874s to wait for pod list to return data ...
	I1002 11:59:33.946715  384965 default_sa.go:34] waiting for default service account to be created ...
	I1002 11:59:33.950028  384965 default_sa.go:45] found service account: "default"
	I1002 11:59:33.950059  384965 default_sa.go:55] duration metric: took 3.333093ms for default service account to be created ...
	I1002 11:59:33.950069  384965 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 11:59:33.956623  384965 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:33.956651  384965 system_pods.go:89] "coredns-5dd5756b68-9wv56" [f04d6125-ea28-41cc-9251-7ccee27162bc] Running
	I1002 11:59:33.956657  384965 system_pods.go:89] "etcd-default-k8s-diff-port-777999" [5bc34f24-2922-4ce8-b11d-935f9b3c8b4c] Running
	I1002 11:59:33.956662  384965 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-777999" [b44f8ca6-f43e-4c99-af8d-23255f94257c] Running
	I1002 11:59:33.956666  384965 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-777999" [5e20830f-d51b-4ca7-a7b6-e0f24f4a50e1] Running
	I1002 11:59:33.956670  384965 system_pods.go:89] "kube-proxy-gchnc" [061811c7-2ac8-448a-b441-838f9aaf9145] Running
	I1002 11:59:33.956674  384965 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-777999" [9d874541-7d2e-4c9b-8935-2bf7386e6a07] Running
	I1002 11:59:33.956681  384965 system_pods.go:89] "metrics-server-57f55c9bc5-wk2c7" [f28e9db7-2182-40d8-85a7-fa40c2ff8077] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:33.956686  384965 system_pods.go:89] "storage-provisioner" [aff1275b-909d-4c70-9fb5-cb36170c591e] Running
	I1002 11:59:33.956694  384965 system_pods.go:126] duration metric: took 6.618721ms to wait for k8s-apps to be running ...
	I1002 11:59:33.956704  384965 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:59:33.956749  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:33.976674  384965 system_svc.go:56] duration metric: took 19.952308ms WaitForService to wait for kubelet.
	I1002 11:59:33.976710  384965 kubeadm.go:581] duration metric: took 4m24.137472355s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:59:33.976750  384965 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:59:33.982173  384965 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:59:33.982211  384965 node_conditions.go:123] node cpu capacity is 2
	I1002 11:59:33.982227  384965 node_conditions.go:105] duration metric: took 5.470843ms to run NodePressure ...
	I1002 11:59:33.982242  384965 start.go:228] waiting for startup goroutines ...
	I1002 11:59:33.982251  384965 start.go:233] waiting for cluster config update ...
	I1002 11:59:33.982303  384965 start.go:242] writing updated cluster config ...
	I1002 11:59:33.982687  384965 ssh_runner.go:195] Run: rm -f paused
	I1002 11:59:34.039684  384965 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 11:59:34.041739  384965 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-777999" cluster and "default" namespace by default
	I1002 11:59:32.723475  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:35.221523  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:32.973400  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:33.473644  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:33.973820  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:34.473607  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:34.973848  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:35.473328  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:35.973485  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:36.473888  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:36.973837  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:37.473514  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:37.973633  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.094807  384787 kubeadm.go:1081] duration metric: took 11.38520709s to wait for elevateKubeSystemPrivileges.
	I1002 11:59:38.094846  384787 kubeadm.go:406] StartCluster complete in 5m11.722637512s
	I1002 11:59:38.094872  384787 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:38.094972  384787 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:59:38.097201  384787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:38.097495  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:59:38.097829  384787 config.go:182] Loaded profile config "embed-certs-487027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:59:38.097966  384787 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:59:38.098056  384787 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-487027"
	I1002 11:59:38.098079  384787 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-487027"
	I1002 11:59:38.098083  384787 addons.go:69] Setting default-storageclass=true in profile "embed-certs-487027"
	I1002 11:59:38.098098  384787 addons.go:69] Setting metrics-server=true in profile "embed-certs-487027"
	I1002 11:59:38.098110  384787 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-487027"
	I1002 11:59:38.098113  384787 addons.go:231] Setting addon metrics-server=true in "embed-certs-487027"
	W1002 11:59:38.098125  384787 addons.go:240] addon metrics-server should already be in state true
	I1002 11:59:38.098177  384787 host.go:66] Checking if "embed-certs-487027" exists ...
	I1002 11:59:38.098608  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.098643  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.098647  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	W1002 11:59:38.098092  384787 addons.go:240] addon storage-provisioner should already be in state true
	I1002 11:59:38.098827  384787 host.go:66] Checking if "embed-certs-487027" exists ...
	I1002 11:59:38.098670  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.099207  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.099235  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.118215  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40451
	I1002 11:59:38.118691  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.119232  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.119260  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.119649  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.120147  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.120182  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.129398  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41027
	I1002 11:59:38.129652  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35275
	I1002 11:59:38.130092  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.130723  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.130746  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.131301  384787 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-487027" context rescaled to 1 replicas
	I1002 11:59:38.131342  384787 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.147 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:59:38.133196  384787 out.go:177] * Verifying Kubernetes components...
	I1002 11:59:38.134675  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:38.132825  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.134964  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.135242  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.135408  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.135434  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.135834  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.136413  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.136455  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.138974  384787 addons.go:231] Setting addon default-storageclass=true in "embed-certs-487027"
	W1002 11:59:38.138995  384787 addons.go:240] addon default-storageclass should already be in state true
	I1002 11:59:38.139025  384787 host.go:66] Checking if "embed-certs-487027" exists ...
	I1002 11:59:38.139434  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.139469  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.141226  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40053
	I1002 11:59:38.141643  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.142086  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.142104  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.142433  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.142609  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.144425  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:59:38.146525  384787 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 11:59:38.148187  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 11:59:38.148204  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 11:59:38.148227  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:59:38.152187  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.152549  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:59:38.152575  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.152783  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:59:38.152988  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:59:38.153139  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:59:38.153280  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:59:38.157114  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33487
	I1002 11:59:38.157655  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.158192  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.158211  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.158619  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.159253  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.159290  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.159506  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34867
	I1002 11:59:38.159895  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.160383  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.160395  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.160727  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.160902  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.162835  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:59:38.164490  384787 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:59:37.211498  384505 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.504818 seconds
	I1002 11:59:37.211660  384505 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 11:59:37.229976  384505 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 11:59:37.759297  384505 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 11:59:37.759467  384505 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-749860 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1002 11:59:38.284135  384505 kubeadm.go:322] [bootstrap-token] Using token: rt49x4.7033jvaiaszsonci
	I1002 11:59:38.285950  384505 out.go:204]   - Configuring RBAC rules ...
	I1002 11:59:38.286108  384505 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 11:59:38.299290  384505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 11:59:38.306326  384505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 11:59:38.312137  384505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 11:59:38.320028  384505 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 11:59:38.439411  384505 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 11:59:38.704007  384505 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 11:59:38.705937  384505 kubeadm.go:322] 
	I1002 11:59:38.706075  384505 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 11:59:38.706096  384505 kubeadm.go:322] 
	I1002 11:59:38.706210  384505 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 11:59:38.706221  384505 kubeadm.go:322] 
	I1002 11:59:38.706256  384505 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 11:59:38.706341  384505 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 11:59:38.706433  384505 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 11:59:38.706448  384505 kubeadm.go:322] 
	I1002 11:59:38.706527  384505 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 11:59:38.706614  384505 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 11:59:38.706701  384505 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 11:59:38.706712  384505 kubeadm.go:322] 
	I1002 11:59:38.706805  384505 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1002 11:59:38.706898  384505 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 11:59:38.706910  384505 kubeadm.go:322] 
	I1002 11:59:38.707003  384505 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rt49x4.7033jvaiaszsonci \
	I1002 11:59:38.707134  384505 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 11:59:38.707169  384505 kubeadm.go:322]     --control-plane 	  
	I1002 11:59:38.707179  384505 kubeadm.go:322] 
	I1002 11:59:38.707272  384505 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 11:59:38.707283  384505 kubeadm.go:322] 
	I1002 11:59:38.707373  384505 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rt49x4.7033jvaiaszsonci \
	I1002 11:59:38.707500  384505 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 11:59:38.708451  384505 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 11:59:38.708478  384505 cni.go:84] Creating CNI manager for ""
	I1002 11:59:38.708501  384505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:59:38.710166  384505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:59:38.711596  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:59:38.725385  384505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:59:38.748155  384505 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:59:38.748294  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.748295  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=old-k8s-version-749860 minikube.k8s.io/updated_at=2023_10_02T11_59_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.795585  384505 ops.go:34] apiserver oom_adj: -16
	I1002 11:59:39.068200  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.166036  384787 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:38.166047  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:59:38.166063  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:59:38.169435  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.169903  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:59:38.169929  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.170098  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:59:38.170273  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:59:38.170517  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:59:38.170711  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:59:38.177450  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35439
	I1002 11:59:38.178044  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.178596  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.178616  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.179009  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.179244  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.181209  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:59:38.181596  384787 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:38.181613  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:59:38.181641  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:59:38.185272  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.185785  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:59:38.185813  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.186245  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:59:38.186539  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:59:38.186748  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:59:38.186938  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:59:38.337092  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 11:59:38.337129  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 11:59:38.379388  384787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:38.389992  384787 node_ready.go:35] waiting up to 6m0s for node "embed-certs-487027" to be "Ready" ...
	I1002 11:59:38.390060  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 11:59:38.399264  384787 node_ready.go:49] node "embed-certs-487027" has status "Ready":"True"
	I1002 11:59:38.399295  384787 node_ready.go:38] duration metric: took 9.264648ms waiting for node "embed-certs-487027" to be "Ready" ...
	I1002 11:59:38.399308  384787 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:38.401885  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 11:59:38.401909  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 11:59:38.406757  384787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:38.438158  384787 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.458749  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:38.458784  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 11:59:38.517143  384787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:38.547128  384787 pod_ready.go:92] pod "etcd-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:38.547161  384787 pod_ready.go:81] duration metric: took 108.899374ms waiting for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.547176  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.744560  384787 pod_ready.go:92] pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:38.744587  384787 pod_ready.go:81] duration metric: took 197.40322ms waiting for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.744598  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.852242  384787 pod_ready.go:92] pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:38.852277  384787 pod_ready.go:81] duration metric: took 107.671499ms waiting for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.852294  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6g7f7" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.017545  384787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.638113738s)
	I1002 11:59:41.017602  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.017613  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.017597  384787 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.627499125s)
	I1002 11:59:41.017658  384787 start.go:923] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1002 11:59:41.017718  384787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.610925223s)
	I1002 11:59:41.017747  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.017759  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.017907  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.017960  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.017977  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.017994  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.018535  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.018549  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.018559  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.018568  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.018636  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.018645  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.018679  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Closing plugin on server side
	I1002 11:59:41.019046  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Closing plugin on server side
	I1002 11:59:41.019049  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.019064  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.027153  384787 pod_ready.go:102] pod "kube-proxy-6g7f7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:41.049978  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.050007  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.050369  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.050391  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.100800  384787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.583606696s)
	I1002 11:59:41.100870  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.100900  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.101237  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.101258  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.101268  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.101278  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.101576  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Closing plugin on server side
	I1002 11:59:41.101621  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.101634  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.101647  384787 addons.go:467] Verifying addon metrics-server=true in "embed-certs-487027"
	I1002 11:59:41.103637  384787 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1002 11:59:37.222165  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:39.223800  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:41.105142  384787 addons.go:502] enable addons completed in 3.007188775s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1002 11:59:41.492039  384787 pod_ready.go:92] pod "kube-proxy-6g7f7" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:41.492067  384787 pod_ready.go:81] duration metric: took 2.639765498s waiting for pod "kube-proxy-6g7f7" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.492081  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.500950  384787 pod_ready.go:92] pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:41.500979  384787 pod_ready.go:81] duration metric: took 8.889098ms waiting for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.500990  384787 pod_ready.go:38] duration metric: took 3.101668727s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:41.501012  384787 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:59:41.501079  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:59:41.533141  384787 api_server.go:72] duration metric: took 3.401757173s to wait for apiserver process to appear ...
	I1002 11:59:41.533167  384787 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:59:41.533183  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:59:41.543027  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 200:
	ok
	I1002 11:59:41.545456  384787 api_server.go:141] control plane version: v1.28.2
	I1002 11:59:41.545483  384787 api_server.go:131] duration metric: took 12.308941ms to wait for apiserver health ...
	I1002 11:59:41.545494  384787 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:59:41.556090  384787 system_pods.go:59] 8 kube-system pods found
	I1002 11:59:41.556183  384787 system_pods.go:61] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:41.556209  384787 system_pods.go:61] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:41.556247  384787 system_pods.go:61] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:41.556272  384787 system_pods.go:61] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:41.556290  384787 system_pods.go:61] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:41.556306  384787 system_pods.go:61] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:41.556329  384787 system_pods.go:61] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:41.556366  384787 system_pods.go:61] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:41.556392  384787 system_pods.go:74] duration metric: took 10.889958ms to wait for pod list to return data ...
	I1002 11:59:41.556412  384787 default_sa.go:34] waiting for default service account to be created ...
	I1002 11:59:41.594659  384787 default_sa.go:45] found service account: "default"
	I1002 11:59:41.594690  384787 default_sa.go:55] duration metric: took 38.261546ms for default service account to be created ...
	I1002 11:59:41.594701  384787 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 11:59:41.800342  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:41.800375  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:41.800382  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:41.800388  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:41.800393  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:41.800397  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:41.800401  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:41.800407  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:41.800412  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:41.800431  384787 retry.go:31] will retry after 300.830497ms: missing components: kube-dns
	I1002 11:59:42.116978  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:42.117028  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:42.117039  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:42.117048  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:42.117058  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:42.117064  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:42.117071  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:42.117080  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:42.117089  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:42.117109  384787 retry.go:31] will retry after 380.49084ms: missing components: kube-dns
	I1002 11:59:42.506867  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:42.506901  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:42.506908  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:42.506914  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:42.506919  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:42.506923  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:42.506927  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:42.506933  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:42.506939  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:42.506954  384787 retry.go:31] will retry after 409.062449ms: missing components: kube-dns
	I1002 11:59:42.924401  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:42.924443  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:42.924456  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:42.924464  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:42.924471  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:42.924477  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:42.924484  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:42.924493  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:42.924503  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:42.924524  384787 retry.go:31] will retry after 544.758887ms: missing components: kube-dns
	I1002 11:59:43.477592  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:43.477622  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Running
	I1002 11:59:43.477628  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:43.477632  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:43.477637  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:43.477640  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:43.477645  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:43.477651  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:43.477657  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Running
	I1002 11:59:43.477665  384787 system_pods.go:126] duration metric: took 1.882959518s to wait for k8s-apps to be running ...
	I1002 11:59:43.477672  384787 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:59:43.477714  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:43.492105  384787 system_svc.go:56] duration metric: took 14.416995ms WaitForService to wait for kubelet.
	I1002 11:59:43.492138  384787 kubeadm.go:581] duration metric: took 5.360761991s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:59:43.492161  384787 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:59:43.496739  384787 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:59:43.496769  384787 node_conditions.go:123] node cpu capacity is 2
	I1002 11:59:43.496785  384787 node_conditions.go:105] duration metric: took 4.61842ms to run NodePressure ...
	I1002 11:59:43.496801  384787 start.go:228] waiting for startup goroutines ...
	I1002 11:59:43.496810  384787 start.go:233] waiting for cluster config update ...
	I1002 11:59:43.496823  384787 start.go:242] writing updated cluster config ...
	I1002 11:59:43.497156  384787 ssh_runner.go:195] Run: rm -f paused
	I1002 11:59:43.568627  384787 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 11:59:43.570324  384787 out.go:177] * Done! kubectl is now configured to use "embed-certs-487027" cluster and "default" namespace by default
	I1002 11:59:39.194035  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:39.810338  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:40.310222  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:40.809912  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:41.310004  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:41.810506  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:42.309581  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:42.810312  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:43.310294  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:43.809602  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:41.722699  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:44.221300  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:44.309927  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:44.810169  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:45.310095  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:45.809546  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:46.310144  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:46.809605  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:47.310487  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:47.809697  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:48.309464  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:48.809680  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:46.723036  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:49.220863  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:51.221417  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:49.310000  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:49.809922  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:50.310214  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:50.809728  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:51.309659  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:51.809723  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:52.309837  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:52.809788  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:53.309655  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:53.809468  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:54.310103  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:54.810421  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:54.968150  384505 kubeadm.go:1081] duration metric: took 16.219921091s to wait for elevateKubeSystemPrivileges.
	I1002 11:59:54.968184  384505 kubeadm.go:406] StartCluster complete in 5m46.426951815s
	I1002 11:59:54.968203  384505 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:54.968302  384505 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:59:54.970101  384505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:54.970429  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:59:54.970599  384505 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:59:54.970672  384505 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-749860"
	I1002 11:59:54.970692  384505 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-749860"
	W1002 11:59:54.970703  384505 addons.go:240] addon storage-provisioner should already be in state true
	I1002 11:59:54.970723  384505 config.go:182] Loaded profile config "old-k8s-version-749860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1002 11:59:54.970753  384505 host.go:66] Checking if "old-k8s-version-749860" exists ...
	I1002 11:59:54.970775  384505 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-749860"
	I1002 11:59:54.970792  384505 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-749860"
	I1002 11:59:54.971196  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.971204  384505 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-749860"
	I1002 11:59:54.971226  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.971199  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.971240  384505 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-749860"
	W1002 11:59:54.971251  384505 addons.go:240] addon metrics-server should already be in state true
	I1002 11:59:54.971281  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.971297  384505 host.go:66] Checking if "old-k8s-version-749860" exists ...
	I1002 11:59:54.971669  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.971707  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.989112  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40477
	I1002 11:59:54.989701  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:54.989819  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38163
	I1002 11:59:54.989971  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41679
	I1002 11:59:54.990503  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:54.990552  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:54.990574  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:54.990592  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:54.990975  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:54.991042  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:54.991062  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:54.991094  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:54.991110  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:54.991327  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:54.991555  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:54.991596  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:54.992169  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.992183  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.992197  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.992206  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.998018  384505 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-749860"
	W1002 11:59:54.998043  384505 addons.go:240] addon default-storageclass should already be in state true
	I1002 11:59:54.998067  384505 host.go:66] Checking if "old-k8s-version-749860" exists ...
	I1002 11:59:54.998716  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:55.003322  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:55.020037  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
	I1002 11:59:55.020659  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.021292  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.021313  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.021707  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.021896  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:55.022155  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34603
	I1002 11:59:55.022286  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35351
	I1002 11:59:55.022697  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.024740  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:59:55.024793  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.024824  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.024839  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.027065  384505 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 11:59:55.025237  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.025561  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.028415  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.028568  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 11:59:55.028579  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 11:59:55.028596  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:59:55.028867  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.029051  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:55.030397  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:55.030424  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:55.031461  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:59:55.033181  384505 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:59:55.032032  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.032651  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:59:55.034670  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:59:55.034698  384505 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:55.034703  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.034711  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:59:55.034727  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:59:55.034894  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:59:55.035089  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:59:55.035269  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:59:55.046534  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.046573  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:59:55.046599  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.046629  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:59:55.046888  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:59:55.047102  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:59:55.047276  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:59:55.051887  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45615
	I1002 11:59:55.052372  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.052940  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.052970  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.053349  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.053558  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:55.055503  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:59:55.055762  384505 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:55.055780  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:59:55.055805  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:59:55.062494  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.062526  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:59:55.062542  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.062550  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:59:55.062752  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:59:55.062922  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:59:55.063162  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:59:55.103907  384505 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-749860" context rescaled to 1 replicas
	I1002 11:59:55.103958  384505 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.82 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:59:55.105626  384505 out.go:177] * Verifying Kubernetes components...
	I1002 11:59:53.722331  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:54.914848  384344 pod_ready.go:81] duration metric: took 4m0.000973055s waiting for pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:54.914899  384344 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:54.914926  384344 pod_ready.go:38] duration metric: took 4m12.745047876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:54.914963  384344 kubeadm.go:640] restartCluster took 4m32.83554771s
	W1002 11:59:54.915062  384344 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1002 11:59:54.915098  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 11:59:55.106948  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:55.283274  384505 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-749860" to be "Ready" ...
	I1002 11:59:55.283336  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 11:59:55.291603  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 11:59:55.291629  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 11:59:55.297775  384505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:55.321901  384505 node_ready.go:49] node "old-k8s-version-749860" has status "Ready":"True"
	I1002 11:59:55.321927  384505 node_ready.go:38] duration metric: took 38.615436ms waiting for node "old-k8s-version-749860" to be "Ready" ...
	I1002 11:59:55.321939  384505 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:55.327570  384505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:55.355612  384505 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:55.357164  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 11:59:55.357187  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 11:59:55.423852  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:55.423883  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 11:59:55.477683  384505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:56.041846  384505 start.go:923] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I1002 11:59:56.230394  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230432  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.230466  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230488  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.230810  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.230869  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.230888  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.230913  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.230936  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230890  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230969  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.230990  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.231024  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.231326  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.231341  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.231652  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.231667  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.231740  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.327260  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.327289  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.327633  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.327654  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.547462  384505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.069727635s)
	I1002 11:59:56.547536  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.547549  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.547901  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.547948  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.547974  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.547993  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.548010  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.548288  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.548321  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.548322  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.548333  384505 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-749860"
	I1002 11:59:56.550084  384505 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1002 11:59:56.551798  384505 addons.go:502] enable addons completed in 1.581195105s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1002 11:59:57.554993  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:59.933613  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:01.937565  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:04.431925  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:05.433988  384505 pod_ready.go:92] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:05.434013  384505 pod_ready.go:81] duration metric: took 10.078369703s waiting for pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:05.434029  384505 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mdtp5" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:05.441501  384505 pod_ready.go:92] pod "kube-proxy-mdtp5" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:05.441534  384505 pod_ready.go:81] duration metric: took 7.496823ms waiting for pod "kube-proxy-mdtp5" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:05.441543  384505 pod_ready.go:38] duration metric: took 10.1195912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:00:05.441592  384505 api_server.go:52] waiting for apiserver process to appear ...
	I1002 12:00:05.441680  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 12:00:05.460054  384505 api_server.go:72] duration metric: took 10.356049869s to wait for apiserver process to appear ...
	I1002 12:00:05.460080  384505 api_server.go:88] waiting for apiserver healthz status ...
	I1002 12:00:05.460100  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 12:00:05.466796  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 200:
	ok
	I1002 12:00:05.467813  384505 api_server.go:141] control plane version: v1.16.0
	I1002 12:00:05.467845  384505 api_server.go:131] duration metric: took 7.75678ms to wait for apiserver health ...
	I1002 12:00:05.467855  384505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 12:00:05.472349  384505 system_pods.go:59] 4 kube-system pods found
	I1002 12:00:05.472384  384505 system_pods.go:61] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:05.472391  384505 system_pods.go:61] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:05.472401  384505 system_pods.go:61] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:05.472410  384505 system_pods.go:61] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:05.472433  384505 system_pods.go:74] duration metric: took 4.569442ms to wait for pod list to return data ...
	I1002 12:00:05.472446  384505 default_sa.go:34] waiting for default service account to be created ...
	I1002 12:00:05.476327  384505 default_sa.go:45] found service account: "default"
	I1002 12:00:05.476349  384505 default_sa.go:55] duration metric: took 3.895344ms for default service account to be created ...
	I1002 12:00:05.476357  384505 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 12:00:05.480522  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:05.480545  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:05.480550  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:05.480557  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:05.480563  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:05.480579  384505 retry.go:31] will retry after 270.891275ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:05.757515  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:05.757555  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:05.757563  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:05.757574  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:05.757585  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:05.757603  384505 retry.go:31] will retry after 336.725562ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:06.099945  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:06.099978  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:06.099985  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:06.099995  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:06.100002  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:06.100024  384505 retry.go:31] will retry after 389.53153ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:06.504317  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:06.504354  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:06.504362  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:06.504375  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:06.504385  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:06.504407  384505 retry.go:31] will retry after 453.465732ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:06.962509  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:06.962534  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:06.962539  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:06.962546  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:06.962552  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:06.962568  384505 retry.go:31] will retry after 489.820063ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:07.457422  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:07.457451  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:07.457456  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:07.457465  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:07.457472  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:07.457490  384505 retry.go:31] will retry after 931.079053ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:08.394500  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:08.394527  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:08.394532  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:08.394538  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:08.394546  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:08.394562  384505 retry.go:31] will retry after 929.512162ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:09.216426  384344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.301296702s)
	I1002 12:00:09.216493  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:00:09.230712  384344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 12:00:09.239588  384344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 12:00:09.248624  384344 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 12:00:09.248677  384344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 12:00:09.466935  384344 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 12:00:09.329677  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:09.329709  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:09.329714  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:09.329722  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:09.329728  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:09.329746  384505 retry.go:31] will retry after 898.08397ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:10.232119  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:10.232155  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:10.232163  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:10.232176  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:10.232185  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:10.232212  384505 retry.go:31] will retry after 1.809149678s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:12.047424  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:12.047452  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:12.047458  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:12.047465  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:12.047471  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:12.047487  384505 retry.go:31] will retry after 2.054960799s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:14.109048  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:14.109080  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:14.109088  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:14.109098  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:14.109108  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:14.109128  384505 retry.go:31] will retry after 2.523219254s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:16.640373  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:16.640399  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:16.640405  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:16.640412  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:16.640419  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:16.640436  384505 retry.go:31] will retry after 2.61022195s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:19.606412  384344 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 12:00:19.606505  384344 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 12:00:19.606620  384344 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 12:00:19.606760  384344 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 12:00:19.606856  384344 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 12:00:19.606912  384344 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 12:00:19.608541  384344 out.go:204]   - Generating certificates and keys ...
	I1002 12:00:19.608638  384344 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 12:00:19.608743  384344 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 12:00:19.608891  384344 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 12:00:19.608999  384344 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1002 12:00:19.609113  384344 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 12:00:19.609193  384344 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1002 12:00:19.609276  384344 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1002 12:00:19.609360  384344 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1002 12:00:19.609453  384344 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 12:00:19.609548  384344 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 12:00:19.609624  384344 kubeadm.go:322] [certs] Using the existing "sa" key
	I1002 12:00:19.609694  384344 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 12:00:19.609761  384344 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 12:00:19.609833  384344 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 12:00:19.609916  384344 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 12:00:19.609991  384344 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 12:00:19.610100  384344 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 12:00:19.610182  384344 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 12:00:19.611696  384344 out.go:204]   - Booting up control plane ...
	I1002 12:00:19.611810  384344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 12:00:19.611916  384344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 12:00:19.612021  384344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 12:00:19.612173  384344 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 12:00:19.612294  384344 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 12:00:19.612346  384344 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 12:00:19.612576  384344 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 12:00:19.612683  384344 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502476 seconds
	I1002 12:00:19.612825  384344 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 12:00:19.612943  384344 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 12:00:19.613026  384344 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 12:00:19.613215  384344 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-304121 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 12:00:19.613266  384344 kubeadm.go:322] [bootstrap-token] Using token: pd40pp.2tkeaw4x1d1qfkq9
	I1002 12:00:19.614472  384344 out.go:204]   - Configuring RBAC rules ...
	I1002 12:00:19.614593  384344 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 12:00:19.614706  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 12:00:19.614912  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 12:00:19.615054  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 12:00:19.615220  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 12:00:19.615315  384344 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 12:00:19.615474  384344 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 12:00:19.615540  384344 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 12:00:19.615622  384344 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 12:00:19.615633  384344 kubeadm.go:322] 
	I1002 12:00:19.615725  384344 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 12:00:19.615747  384344 kubeadm.go:322] 
	I1002 12:00:19.615851  384344 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 12:00:19.615864  384344 kubeadm.go:322] 
	I1002 12:00:19.615894  384344 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 12:00:19.615997  384344 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 12:00:19.616084  384344 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 12:00:19.616094  384344 kubeadm.go:322] 
	I1002 12:00:19.616143  384344 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 12:00:19.616152  384344 kubeadm.go:322] 
	I1002 12:00:19.616222  384344 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 12:00:19.616240  384344 kubeadm.go:322] 
	I1002 12:00:19.616321  384344 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 12:00:19.616420  384344 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 12:00:19.616532  384344 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 12:00:19.616548  384344 kubeadm.go:322] 
	I1002 12:00:19.616640  384344 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 12:00:19.616734  384344 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 12:00:19.616743  384344 kubeadm.go:322] 
	I1002 12:00:19.616857  384344 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token pd40pp.2tkeaw4x1d1qfkq9 \
	I1002 12:00:19.617005  384344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 12:00:19.617049  384344 kubeadm.go:322] 	--control-plane 
	I1002 12:00:19.617059  384344 kubeadm.go:322] 
	I1002 12:00:19.617136  384344 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 12:00:19.617142  384344 kubeadm.go:322] 
	I1002 12:00:19.617238  384344 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token pd40pp.2tkeaw4x1d1qfkq9 \
	I1002 12:00:19.617333  384344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 12:00:19.617371  384344 cni.go:84] Creating CNI manager for ""
	I1002 12:00:19.617384  384344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 12:00:19.618962  384344 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 12:00:19.620215  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 12:00:19.650698  384344 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
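The `scp memory --> /etc/cni/net.d/1-k8s.conflist` step above writes minikube's bridge CNI configuration. The fragment below is a hedged illustration of the general shape of such a conflist (a `bridge` plugin with `host-local` IPAM plus `portmap`), not the exact 457-byte file the test wrote — the subnet and field values here are assumptions.

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```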
	I1002 12:00:19.699458  384344 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 12:00:19.699594  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=no-preload-304121 minikube.k8s.io/updated_at=2023_10_02T12_00_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:19.699598  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:19.810984  384344 ops.go:34] apiserver oom_adj: -16
	I1002 12:00:20.114460  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:20.245669  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:20.876563  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:19.256294  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:19.256319  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:19.256325  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:19.256332  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:19.256338  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:19.256355  384505 retry.go:31] will retry after 3.270215577s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:22.532684  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:22.532714  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:22.532723  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:22.532730  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:22.532737  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:22.532754  384505 retry.go:31] will retry after 5.273561216s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:21.376620  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:21.876453  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:22.376537  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:22.876967  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:23.377242  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:23.876469  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:24.376391  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:24.877422  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:25.376422  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:25.877251  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:27.810777  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:27.810810  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:27.810816  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:27.810822  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:27.810828  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:27.810845  384505 retry.go:31] will retry after 6.34425242s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:26.376388  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:26.877267  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:27.376480  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:27.877214  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:28.376560  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:28.876964  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:29.377314  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:29.877135  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:30.377301  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:30.876525  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:31.376660  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:31.876991  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:32.376934  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:32.584774  384344 kubeadm.go:1081] duration metric: took 12.88524826s to wait for elevateKubeSystemPrivileges.
	I1002 12:00:32.584821  384344 kubeadm.go:406] StartCluster complete in 5m10.55691254s
	I1002 12:00:32.584849  384344 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:00:32.584955  384344 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 12:00:32.587722  384344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:00:32.588018  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 12:00:32.588146  384344 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 12:00:32.588230  384344 addons.go:69] Setting default-storageclass=true in profile "no-preload-304121"
	I1002 12:00:32.588251  384344 addons.go:69] Setting metrics-server=true in profile "no-preload-304121"
	I1002 12:00:32.588265  384344 addons.go:231] Setting addon metrics-server=true in "no-preload-304121"
	W1002 12:00:32.588273  384344 addons.go:240] addon metrics-server should already be in state true
	I1002 12:00:32.588252  384344 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-304121"
	I1002 12:00:32.588323  384344 config.go:182] Loaded profile config "no-preload-304121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:00:32.588333  384344 host.go:66] Checking if "no-preload-304121" exists ...
	I1002 12:00:32.588229  384344 addons.go:69] Setting storage-provisioner=true in profile "no-preload-304121"
	I1002 12:00:32.588387  384344 addons.go:231] Setting addon storage-provisioner=true in "no-preload-304121"
	W1002 12:00:32.588397  384344 addons.go:240] addon storage-provisioner should already be in state true
	I1002 12:00:32.588433  384344 host.go:66] Checking if "no-preload-304121" exists ...
	I1002 12:00:32.588695  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.588731  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.588737  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.588777  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.588867  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.588891  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.612093  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40075
	I1002 12:00:32.612118  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33697
	I1002 12:00:32.612252  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I1002 12:00:32.612652  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.612799  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.612847  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.613307  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.613337  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.613432  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.613504  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.613715  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.613718  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.613838  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.613955  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.614146  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.614197  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.614802  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.614842  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.615497  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.615534  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.617844  384344 addons.go:231] Setting addon default-storageclass=true in "no-preload-304121"
	W1002 12:00:32.617884  384344 addons.go:240] addon default-storageclass should already be in state true
	I1002 12:00:32.617914  384344 host.go:66] Checking if "no-preload-304121" exists ...
	I1002 12:00:32.618326  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.618436  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.634123  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36037
	I1002 12:00:32.634849  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.634953  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
	I1002 12:00:32.635328  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.635470  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.635495  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.635819  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.635841  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.635867  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.636193  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.636340  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.636373  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.636435  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.637717  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39059
	I1002 12:00:32.638051  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 12:00:32.640160  384344 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 12:00:32.642288  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 12:00:32.642300  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 12:00:32.642314  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 12:00:32.640240  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.642837  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.642863  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.643527  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.643695  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.645514  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.645565  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 12:00:32.648157  384344 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 12:00:32.645977  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 12:00:32.646152  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 12:00:32.650297  384344 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 12:00:32.650313  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 12:00:32.650328  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 12:00:32.650380  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.650547  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 12:00:32.650823  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 12:00:32.650961  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 12:00:32.653953  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.654560  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 12:00:32.654592  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.654886  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 12:00:32.655049  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 12:00:32.655195  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 12:00:32.655410  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 12:00:32.658005  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45823
	I1002 12:00:32.658525  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.659046  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.659059  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.659478  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.659611  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.661708  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 12:00:32.661982  384344 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 12:00:32.661998  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 12:00:32.662018  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 12:00:32.664637  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.665005  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 12:00:32.665023  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.665161  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 12:00:32.665335  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 12:00:32.665426  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 12:00:32.665610  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 12:00:32.723429  384344 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-304121" context rescaled to 1 replicas
	I1002 12:00:32.723469  384344 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 12:00:32.725329  384344 out.go:177] * Verifying Kubernetes components...
	I1002 12:00:32.726924  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:00:32.860425  384344 node_ready.go:35] waiting up to 6m0s for node "no-preload-304121" to be "Ready" ...
	I1002 12:00:32.860515  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 12:00:32.904658  384344 node_ready.go:49] node "no-preload-304121" has status "Ready":"True"
	I1002 12:00:32.904689  384344 node_ready.go:38] duration metric: took 44.230643ms waiting for node "no-preload-304121" to be "Ready" ...
	I1002 12:00:32.904705  384344 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:00:32.949887  384344 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:32.984050  384344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 12:00:32.997841  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 12:00:32.997869  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 12:00:32.999235  384344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 12:00:33.082015  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 12:00:33.082051  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 12:00:33.326524  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 12:00:33.326554  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 12:00:33.403533  384344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 12:00:34.844716  384344 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.984135314s)
	I1002 12:00:34.844752  384344 start.go:923] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1002 12:00:35.114639  384344 pod_ready.go:102] pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:35.538571  384344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.55447937s)
	I1002 12:00:35.538624  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.538641  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.538652  384344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.539381648s)
	I1002 12:00:35.538700  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.538713  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.539005  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539027  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.539039  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.539049  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.539137  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.539162  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539176  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.539194  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.539203  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.539299  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.539328  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539341  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.539537  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.539588  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539622  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.596015  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.596048  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.596384  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.596431  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.596449  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.641915  384344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.238327482s)
	I1002 12:00:35.641985  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.642007  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.642363  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.642389  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.642399  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.642409  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.642423  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.642716  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.642739  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.642750  384344 addons.go:467] Verifying addon metrics-server=true in "no-preload-304121"
	I1002 12:00:35.644696  384344 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1002 12:00:35.646046  384344 addons.go:502] enable addons completed in 3.05790546s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1002 12:00:36.113386  384344 pod_ready.go:92] pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.113415  384344 pod_ready.go:81] duration metric: took 3.163496821s waiting for pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.113429  384344 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.116264  384344 pod_ready.go:97] error getting pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-zcnv5" not found
	I1002 12:00:36.116290  384344 pod_ready.go:81] duration metric: took 2.85415ms waiting for pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace to be "Ready" ...
	E1002 12:00:36.116300  384344 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-zcnv5" not found
	I1002 12:00:36.116306  384344 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.126555  384344 pod_ready.go:92] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.126575  384344 pod_ready.go:81] duration metric: took 10.262082ms waiting for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.126583  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.137876  384344 pod_ready.go:92] pod "kube-apiserver-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.137903  384344 pod_ready.go:81] duration metric: took 11.312511ms waiting for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.137916  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.146526  384344 pod_ready.go:92] pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.146549  384344 pod_ready.go:81] duration metric: took 8.624341ms waiting for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.146561  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sprhm" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.307205  384344 pod_ready.go:92] pod "kube-proxy-sprhm" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.307231  384344 pod_ready.go:81] duration metric: took 160.663088ms waiting for pod "kube-proxy-sprhm" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.307241  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.707429  384344 pod_ready.go:92] pod "kube-scheduler-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.707455  384344 pod_ready.go:81] duration metric: took 400.207608ms waiting for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.707463  384344 pod_ready.go:38] duration metric: took 3.802745796s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:00:36.707480  384344 api_server.go:52] waiting for apiserver process to appear ...
	I1002 12:00:36.707537  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 12:00:36.733934  384344 api_server.go:72] duration metric: took 4.010431274s to wait for apiserver process to appear ...
	I1002 12:00:36.733962  384344 api_server.go:88] waiting for apiserver healthz status ...
	I1002 12:00:36.733979  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 12:00:36.740562  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 200:
	ok
	I1002 12:00:36.742234  384344 api_server.go:141] control plane version: v1.28.2
	I1002 12:00:36.742259  384344 api_server.go:131] duration metric: took 8.289515ms to wait for apiserver health ...
	I1002 12:00:36.742270  384344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 12:00:36.910934  384344 system_pods.go:59] 8 kube-system pods found
	I1002 12:00:36.910962  384344 system_pods.go:61] "coredns-5dd5756b68-st2bd" [6623fa3f-9a60-4364-bf08-7e84ae35d4b6] Running
	I1002 12:00:36.910967  384344 system_pods.go:61] "etcd-no-preload-304121" [f0a08dd5-ccdd-44a8-8d0a-ba5f617db7e0] Running
	I1002 12:00:36.910971  384344 system_pods.go:61] "kube-apiserver-no-preload-304121" [2e0d2991-fec5-44b4-8bb2-70206956c983] Running
	I1002 12:00:36.910976  384344 system_pods.go:61] "kube-controller-manager-no-preload-304121" [51031981-2958-4947-8d10-59a15a77ec1b] Running
	I1002 12:00:36.910980  384344 system_pods.go:61] "kube-proxy-sprhm" [d032413b-07c5-4478-bbdf-93383f85f73d] Running
	I1002 12:00:36.910983  384344 system_pods.go:61] "kube-scheduler-no-preload-304121" [f825ba3f-3bca-40ed-a5db-d3a3fc8b0751] Running
	I1002 12:00:36.910991  384344 system_pods.go:61] "metrics-server-57f55c9bc5-6c2hc" [020790e8-555b-4455-8e82-6ea49bb4212a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:36.911002  384344 system_pods.go:61] "storage-provisioner" [9c5b5a2d-e464-477e-9b5c-bf830ee9c640] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 12:00:36.911013  384344 system_pods.go:74] duration metric: took 168.734676ms to wait for pod list to return data ...
	I1002 12:00:36.911027  384344 default_sa.go:34] waiting for default service account to be created ...
	I1002 12:00:37.106994  384344 default_sa.go:45] found service account: "default"
	I1002 12:00:37.107038  384344 default_sa.go:55] duration metric: took 196.001935ms for default service account to be created ...
	I1002 12:00:37.107050  384344 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 12:00:37.310973  384344 system_pods.go:86] 8 kube-system pods found
	I1002 12:00:37.311012  384344 system_pods.go:89] "coredns-5dd5756b68-st2bd" [6623fa3f-9a60-4364-bf08-7e84ae35d4b6] Running
	I1002 12:00:37.311021  384344 system_pods.go:89] "etcd-no-preload-304121" [f0a08dd5-ccdd-44a8-8d0a-ba5f617db7e0] Running
	I1002 12:00:37.311028  384344 system_pods.go:89] "kube-apiserver-no-preload-304121" [2e0d2991-fec5-44b4-8bb2-70206956c983] Running
	I1002 12:00:37.311034  384344 system_pods.go:89] "kube-controller-manager-no-preload-304121" [51031981-2958-4947-8d10-59a15a77ec1b] Running
	I1002 12:00:37.311041  384344 system_pods.go:89] "kube-proxy-sprhm" [d032413b-07c5-4478-bbdf-93383f85f73d] Running
	I1002 12:00:37.311049  384344 system_pods.go:89] "kube-scheduler-no-preload-304121" [f825ba3f-3bca-40ed-a5db-d3a3fc8b0751] Running
	I1002 12:00:37.311060  384344 system_pods.go:89] "metrics-server-57f55c9bc5-6c2hc" [020790e8-555b-4455-8e82-6ea49bb4212a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:37.311075  384344 system_pods.go:89] "storage-provisioner" [9c5b5a2d-e464-477e-9b5c-bf830ee9c640] Running
	I1002 12:00:37.311093  384344 system_pods.go:126] duration metric: took 204.035391ms to wait for k8s-apps to be running ...
	I1002 12:00:37.311103  384344 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 12:00:37.311158  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:00:37.327711  384344 system_svc.go:56] duration metric: took 16.597865ms WaitForService to wait for kubelet.
	I1002 12:00:37.327736  384344 kubeadm.go:581] duration metric: took 4.604243467s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 12:00:37.327758  384344 node_conditions.go:102] verifying NodePressure condition ...
	I1002 12:00:37.506633  384344 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 12:00:37.506693  384344 node_conditions.go:123] node cpu capacity is 2
	I1002 12:00:37.506708  384344 node_conditions.go:105] duration metric: took 178.94359ms to run NodePressure ...
	I1002 12:00:37.506722  384344 start.go:228] waiting for startup goroutines ...
	I1002 12:00:37.506728  384344 start.go:233] waiting for cluster config update ...
	I1002 12:00:37.506738  384344 start.go:242] writing updated cluster config ...
	I1002 12:00:37.506999  384344 ssh_runner.go:195] Run: rm -f paused
	I1002 12:00:37.558171  384344 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 12:00:37.560280  384344 out.go:177] * Done! kubectl is now configured to use "no-preload-304121" cluster and "default" namespace by default
	I1002 12:00:34.160478  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:34.160520  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:34.160528  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:34.160540  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:34.160553  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:34.160577  384505 retry.go:31] will retry after 8.056057378s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:42.223209  384505 system_pods.go:86] 5 kube-system pods found
	I1002 12:00:42.223242  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:42.223251  384505 system_pods.go:89] "etcd-old-k8s-version-749860" [893d0bc0-cefb-48d6-82d8-6d6804184a67] Pending
	I1002 12:00:42.223257  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:42.223267  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:42.223276  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:42.223299  384505 retry.go:31] will retry after 9.279474557s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:51.510907  384505 system_pods.go:86] 6 kube-system pods found
	I1002 12:00:51.510937  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:51.510945  384505 system_pods.go:89] "etcd-old-k8s-version-749860" [893d0bc0-cefb-48d6-82d8-6d6804184a67] Running
	I1002 12:00:51.510949  384505 system_pods.go:89] "kube-apiserver-old-k8s-version-749860" [41854b6e-d738-4af3-9734-8133b2a299df] Pending
	I1002 12:00:51.510953  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:51.510959  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:51.510965  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:51.510995  384505 retry.go:31] will retry after 9.19295244s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:01:00.712167  384505 system_pods.go:86] 8 kube-system pods found
	I1002 12:01:00.712195  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:01:00.712201  384505 system_pods.go:89] "etcd-old-k8s-version-749860" [893d0bc0-cefb-48d6-82d8-6d6804184a67] Running
	I1002 12:01:00.712205  384505 system_pods.go:89] "kube-apiserver-old-k8s-version-749860" [41854b6e-d738-4af3-9734-8133b2a299df] Running
	I1002 12:01:00.712209  384505 system_pods.go:89] "kube-controller-manager-old-k8s-version-749860" [1531e118-f1f1-485e-b258-32e21b3385d8] Running
	I1002 12:01:00.712213  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:01:00.712217  384505 system_pods.go:89] "kube-scheduler-old-k8s-version-749860" [66983e5c-64ab-48ec-9c24-824f0a7cb36e] Running
	I1002 12:01:00.712223  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:01:00.712230  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:01:00.712237  384505 system_pods.go:126] duration metric: took 55.235875161s to wait for k8s-apps to be running ...
	I1002 12:01:00.712244  384505 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 12:01:00.712293  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:01:00.728970  384505 system_svc.go:56] duration metric: took 16.712185ms WaitForService to wait for kubelet.
	I1002 12:01:00.728999  384505 kubeadm.go:581] duration metric: took 1m5.625005524s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 12:01:00.729026  384505 node_conditions.go:102] verifying NodePressure condition ...
	I1002 12:01:00.733153  384505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 12:01:00.733180  384505 node_conditions.go:123] node cpu capacity is 2
	I1002 12:01:00.733196  384505 node_conditions.go:105] duration metric: took 4.162147ms to run NodePressure ...
	I1002 12:01:00.733209  384505 start.go:228] waiting for startup goroutines ...
	I1002 12:01:00.733216  384505 start.go:233] waiting for cluster config update ...
	I1002 12:01:00.733230  384505 start.go:242] writing updated cluster config ...
	I1002 12:01:00.733553  384505 ssh_runner.go:195] Run: rm -f paused
	I1002 12:01:00.784237  384505 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I1002 12:01:00.786178  384505 out.go:177] 
	W1002 12:01:00.787686  384505 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I1002 12:01:00.789104  384505 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1002 12:01:00.790521  384505 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-749860" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-02 11:53:50 UTC, ends at Mon 2023-10-02 12:10:02 UTC. --
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.543176789Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248602543158283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=4ec6c9cc-aea7-40c4-8fd1-d4607a5366cb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.543833011Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=49ca7127-2438-4351-b916-285b5d5b06af name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.543883405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=49ca7127-2438-4351-b916-285b5d5b06af name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.544055673Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:039738890c0c0c918dfb2589cc51cc129d9f7a885474027049d11260e016669d,PodSandboxId:c387574f801288e1a08cb1a6f4badabfbe4bc9cfe76e3ce1b94db5014c72045d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247997721919038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 893fe40b-d1d7-4569-8d99-85038005f53a,},Annotations:map[string]string{io.kubernetes.container.hash: f447fab5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bbbaa70397b802c00dcdcc10194a8a68b8409c0fbb5d0a3272f4eab93a5803a,PodSandboxId:63ed4ec3fc8a3e595cf3ab2f1bf8f972e2319f48bfa70fc828da0dc1514a59eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1696247997432387507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7b9bb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b437c1ff-d7c0-4708-b799-e4ca54bd00cc,},Annotations:map[string]string{io.kubernetes.container.hash: c65aea65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92b70651bc8fdda674d4dfa238bc0d51561a6a7d56207488bdb638acda4bc855,PodSandboxId:9cb28fe0eb66b72b0e737eb51d13adcbd75d9ed5c1bb6649e6655c1d0b6236b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1696247996878236072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdtp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e09
a24-84ff-4480-b1e4-39273ef37086,},Annotations:map[string]string{io.kubernetes.container.hash: cb8c26f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9171c6defa67e303dc9b30efbd23f4378fe4dd579bbcae014d7f44068b4eaab5,PodSandboxId:85f6c5d981253e9ec12d99b412af9f4331190426377c28862dda620fc39fae01,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1696247969450661547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1503f30103b4107023c1689b533624,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 4ae11ec6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d21512e1f3511763fa7ac50dfb88800acac5e24ee79277815e016946c18474,PodSandboxId:82a27613594c0acfa17eef576d4b90686ee3172a10b88c7177b04152807fc7c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1696247968045343775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394cafbfead7293254874e819f893bca2ea8aaaff9ae6292b622e35df660404,PodSandboxId:65e57f7a352a9f1c034ca87d29fedab7c6c00c2e7abd2dae7499329df732e12d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1696247967865800453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2034f979f948c1a718b43753d97a5ead,},Annotations:map[string]string{io.kubern
etes.container.hash: 33bf34cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4799bd0c57b132de5e00957eeb4c1380b4319d3c7cbd8a43e78aba49f7be28f8,PodSandboxId:723ffac35945d053d077481404d120373073043d8935ecef90f350f4a994a889,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1696247967587403016,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=49ca7127-2438-4351-b916-285b5d5b06af name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.585961588Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=bbb083c4-92d6-4345-9e0b-c3535d06932a name=/runtime.v1.RuntimeService/Version
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.586017520Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bbb083c4-92d6-4345-9e0b-c3535d06932a name=/runtime.v1.RuntimeService/Version
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.587476679Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=14d490eb-42cd-46a7-aba4-453b33d9f51a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.587948214Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248602587933997,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=14d490eb-42cd-46a7-aba4-453b33d9f51a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.588956379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9a7d60dd-da93-49bc-9e5b-8383ebeed7b3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.588999639Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9a7d60dd-da93-49bc-9e5b-8383ebeed7b3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.589193851Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:039738890c0c0c918dfb2589cc51cc129d9f7a885474027049d11260e016669d,PodSandboxId:c387574f801288e1a08cb1a6f4badabfbe4bc9cfe76e3ce1b94db5014c72045d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247997721919038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 893fe40b-d1d7-4569-8d99-85038005f53a,},Annotations:map[string]string{io.kubernetes.container.hash: f447fab5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bbbaa70397b802c00dcdcc10194a8a68b8409c0fbb5d0a3272f4eab93a5803a,PodSandboxId:63ed4ec3fc8a3e595cf3ab2f1bf8f972e2319f48bfa70fc828da0dc1514a59eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1696247997432387507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7b9bb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b437c1ff-d7c0-4708-b799-e4ca54bd00cc,},Annotations:map[string]string{io.kubernetes.container.hash: c65aea65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92b70651bc8fdda674d4dfa238bc0d51561a6a7d56207488bdb638acda4bc855,PodSandboxId:9cb28fe0eb66b72b0e737eb51d13adcbd75d9ed5c1bb6649e6655c1d0b6236b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1696247996878236072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdtp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e09
a24-84ff-4480-b1e4-39273ef37086,},Annotations:map[string]string{io.kubernetes.container.hash: cb8c26f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9171c6defa67e303dc9b30efbd23f4378fe4dd579bbcae014d7f44068b4eaab5,PodSandboxId:85f6c5d981253e9ec12d99b412af9f4331190426377c28862dda620fc39fae01,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1696247969450661547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1503f30103b4107023c1689b533624,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 4ae11ec6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d21512e1f3511763fa7ac50dfb88800acac5e24ee79277815e016946c18474,PodSandboxId:82a27613594c0acfa17eef576d4b90686ee3172a10b88c7177b04152807fc7c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1696247968045343775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394cafbfead7293254874e819f893bca2ea8aaaff9ae6292b622e35df660404,PodSandboxId:65e57f7a352a9f1c034ca87d29fedab7c6c00c2e7abd2dae7499329df732e12d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1696247967865800453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2034f979f948c1a718b43753d97a5ead,},Annotations:map[string]string{io.kubern
etes.container.hash: 33bf34cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4799bd0c57b132de5e00957eeb4c1380b4319d3c7cbd8a43e78aba49f7be28f8,PodSandboxId:723ffac35945d053d077481404d120373073043d8935ecef90f350f4a994a889,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1696247967587403016,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9a7d60dd-da93-49bc-9e5b-8383ebeed7b3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.633865981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=da4466e4-662f-486d-ae45-0fc20f0619be name=/runtime.v1.RuntimeService/Version
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.633923800Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=da4466e4-662f-486d-ae45-0fc20f0619be name=/runtime.v1.RuntimeService/Version
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.635334127Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=17519d06-25db-4e9d-a9db-5e538ca6d467 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.635884583Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248602635867186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=17519d06-25db-4e9d-a9db-5e538ca6d467 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.636728064Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f0d2f7d4-94f9-4928-9ad5-e50cf27c0f14 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.636784644Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f0d2f7d4-94f9-4928-9ad5-e50cf27c0f14 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.637006465Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:039738890c0c0c918dfb2589cc51cc129d9f7a885474027049d11260e016669d,PodSandboxId:c387574f801288e1a08cb1a6f4badabfbe4bc9cfe76e3ce1b94db5014c72045d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247997721919038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 893fe40b-d1d7-4569-8d99-85038005f53a,},Annotations:map[string]string{io.kubernetes.container.hash: f447fab5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bbbaa70397b802c00dcdcc10194a8a68b8409c0fbb5d0a3272f4eab93a5803a,PodSandboxId:63ed4ec3fc8a3e595cf3ab2f1bf8f972e2319f48bfa70fc828da0dc1514a59eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1696247997432387507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7b9bb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b437c1ff-d7c0-4708-b799-e4ca54bd00cc,},Annotations:map[string]string{io.kubernetes.container.hash: c65aea65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92b70651bc8fdda674d4dfa238bc0d51561a6a7d56207488bdb638acda4bc855,PodSandboxId:9cb28fe0eb66b72b0e737eb51d13adcbd75d9ed5c1bb6649e6655c1d0b6236b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1696247996878236072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdtp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e09
a24-84ff-4480-b1e4-39273ef37086,},Annotations:map[string]string{io.kubernetes.container.hash: cb8c26f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9171c6defa67e303dc9b30efbd23f4378fe4dd579bbcae014d7f44068b4eaab5,PodSandboxId:85f6c5d981253e9ec12d99b412af9f4331190426377c28862dda620fc39fae01,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1696247969450661547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1503f30103b4107023c1689b533624,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 4ae11ec6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d21512e1f3511763fa7ac50dfb88800acac5e24ee79277815e016946c18474,PodSandboxId:82a27613594c0acfa17eef576d4b90686ee3172a10b88c7177b04152807fc7c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1696247968045343775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394cafbfead7293254874e819f893bca2ea8aaaff9ae6292b622e35df660404,PodSandboxId:65e57f7a352a9f1c034ca87d29fedab7c6c00c2e7abd2dae7499329df732e12d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1696247967865800453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2034f979f948c1a718b43753d97a5ead,},Annotations:map[string]string{io.kubern
etes.container.hash: 33bf34cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4799bd0c57b132de5e00957eeb4c1380b4319d3c7cbd8a43e78aba49f7be28f8,PodSandboxId:723ffac35945d053d077481404d120373073043d8935ecef90f350f4a994a889,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1696247967587403016,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f0d2f7d4-94f9-4928-9ad5-e50cf27c0f14 name=/runtime.v1.RuntimeService/ListContainers
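The `CreatedAt` fields in the ListContainers responses above are Unix epoch nanoseconds (e.g. `1696247996878236072`), which is awkward to match against the journal's wall-clock timestamps. A minimal helper for doing that conversion while reading these dumps (an illustrative sketch, not part of the test harness) could look like:

```python
from datetime import datetime, timezone

def to_utc_seconds(created_at_ns: int) -> str:
    """Convert a CRI CreatedAt value (Unix epoch nanoseconds) to a
    second-resolution UTC timestamp for matching against journal lines."""
    return datetime.fromtimestamp(created_at_ns // 1_000_000_000,
                                  tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

# kube-proxy container above: created shortly before the node went Ready
print(to_utc_seconds(1696247996878236072))  # 2023-10-02 11:59:56
```

This places the kube-proxy container's creation at 11:59:56 UTC, consistent with the `Starting kube-proxy.` node event roughly ten minutes before these logs were captured.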
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.674349062Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2f8ebe90-b17b-4f03-9109-68731fbc7f9b name=/runtime.v1.RuntimeService/Version
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.674432747Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2f8ebe90-b17b-4f03-9109-68731fbc7f9b name=/runtime.v1.RuntimeService/Version
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.681364701Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ddf12ab2-a75b-4474-a3b2-1fe4e701b3a3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.681899373Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248602681881558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=ddf12ab2-a75b-4474-a3b2-1fe4e701b3a3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.684116396Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=07eef640-cd60-4b32-bf7f-9ba472fc3dbd name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.684197373Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=07eef640-cd60-4b32-bf7f-9ba472fc3dbd name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:10:02 old-k8s-version-749860 crio[715]: time="2023-10-02 12:10:02.684401247Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:039738890c0c0c918dfb2589cc51cc129d9f7a885474027049d11260e016669d,PodSandboxId:c387574f801288e1a08cb1a6f4badabfbe4bc9cfe76e3ce1b94db5014c72045d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247997721919038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 893fe40b-d1d7-4569-8d99-85038005f53a,},Annotations:map[string]string{io.kubernetes.container.hash: f447fab5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bbbaa70397b802c00dcdcc10194a8a68b8409c0fbb5d0a3272f4eab93a5803a,PodSandboxId:63ed4ec3fc8a3e595cf3ab2f1bf8f972e2319f48bfa70fc828da0dc1514a59eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1696247997432387507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7b9bb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b437c1ff-d7c0-4708-b799-e4ca54bd00cc,},Annotations:map[string]string{io.kubernetes.container.hash: c65aea65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92b70651bc8fdda674d4dfa238bc0d51561a6a7d56207488bdb638acda4bc855,PodSandboxId:9cb28fe0eb66b72b0e737eb51d13adcbd75d9ed5c1bb6649e6655c1d0b6236b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1696247996878236072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdtp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e09
a24-84ff-4480-b1e4-39273ef37086,},Annotations:map[string]string{io.kubernetes.container.hash: cb8c26f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9171c6defa67e303dc9b30efbd23f4378fe4dd579bbcae014d7f44068b4eaab5,PodSandboxId:85f6c5d981253e9ec12d99b412af9f4331190426377c28862dda620fc39fae01,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1696247969450661547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1503f30103b4107023c1689b533624,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 4ae11ec6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d21512e1f3511763fa7ac50dfb88800acac5e24ee79277815e016946c18474,PodSandboxId:82a27613594c0acfa17eef576d4b90686ee3172a10b88c7177b04152807fc7c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1696247968045343775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394cafbfead7293254874e819f893bca2ea8aaaff9ae6292b622e35df660404,PodSandboxId:65e57f7a352a9f1c034ca87d29fedab7c6c00c2e7abd2dae7499329df732e12d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1696247967865800453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2034f979f948c1a718b43753d97a5ead,},Annotations:map[string]string{io.kubern
etes.container.hash: 33bf34cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4799bd0c57b132de5e00957eeb4c1380b4319d3c7cbd8a43e78aba49f7be28f8,PodSandboxId:723ffac35945d053d077481404d120373073043d8935ecef90f350f4a994a889,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1696247967587403016,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=07eef640-cd60-4b32-bf7f-9ba472fc3dbd name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	039738890c0c0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   c387574f80128       storage-provisioner
	0bbbaa70397b8       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   63ed4ec3fc8a3       coredns-5644d7b6d9-7b9bb
	92b70651bc8fd       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   9cb28fe0eb66b       kube-proxy-mdtp5
	9171c6defa67e       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   85f6c5d981253       etcd-old-k8s-version-749860
	27d21512e1f35       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   82a27613594c0       kube-scheduler-old-k8s-version-749860
	8394cafbfead7       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   65e57f7a352a9       kube-apiserver-old-k8s-version-749860
	4799bd0c57b13       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   723ffac35945d       kube-controller-manager-old-k8s-version-749860
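The status table truncates container IDs to a 13-character prefix (e.g. `039738890c0c0`), while the ListContainers dumps earlier carry the full 64-character IDs. A small resolver (a sketch for reading these reports, assuming the prefix is unique within the dump) maps one back to the other:

```python
def resolve_container_id(short_id: str, full_ids: list[str]) -> str:
    """Expand a truncated container ID from the status table to the full
    ID found in a ListContainers response; reject ambiguous prefixes."""
    hits = [cid for cid in full_ids if cid.startswith(short_id)]
    if len(hits) != 1:
        raise ValueError(f"ambiguous or unknown prefix {short_id!r}")
    return hits[0]
```

For example, `039738890c0c0` from the storage-provisioner row resolves to `039738890c0c0c918dfb2589cc51cc129d9f7a885474027049d11260e016669d` in the ListContainers output above.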
	
	* 
	* ==> coredns [0bbbaa70397b802c00dcdcc10194a8a68b8409c0fbb5d0a3272f4eab93a5803a] <==
	* .:53
	2023-10-02T11:59:57.723Z [INFO] plugin/reload: Running configuration MD5 = 6d61b2f41ed11e6ad276aa627263dbc3
	2023-10-02T11:59:57.724Z [INFO] CoreDNS-1.6.2
	2023-10-02T11:59:57.724Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-10-02T11:59:58.743Z [INFO] 127.0.0.1:44886 - 61132 "HINFO IN 8385809371994932739.3761755439345032964. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022856664s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-749860
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-749860
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=old-k8s-version-749860
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T11_59_38_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 11:59:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 12:09:34 +0000   Mon, 02 Oct 2023 11:59:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 12:09:34 +0000   Mon, 02 Oct 2023 11:59:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 12:09:34 +0000   Mon, 02 Oct 2023 11:59:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 12:09:34 +0000   Mon, 02 Oct 2023 11:59:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.82
	  Hostname:    old-k8s-version-749860
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 be9b48f6bc7c4943a52c7e86d3eca20b
	 System UUID:                be9b48f6-bc7c-4943-a52c-7e86d3eca20b
	 Boot ID:                    fe9fde7a-fce1-478f-bc16-9c4054693c03
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-7b9bb                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-749860                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                kube-apiserver-old-k8s-version-749860             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                kube-controller-manager-old-k8s-version-749860    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                kube-proxy-mdtp5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-749860             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                metrics-server-74d5856cc6-n7z95                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-749860     Node old-k8s-version-749860 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet, old-k8s-version-749860     Node old-k8s-version-749860 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet, old-k8s-version-749860     Node old-k8s-version-749860 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-749860  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct 2 11:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070695] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.325492] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.441878] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153738] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.448486] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000051] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.315526] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.131775] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.163930] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.128336] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.236769] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[Oct 2 11:54] systemd-fstab-generator[1029]: Ignoring "noauto" for root device
	[  +0.476621] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +21.474357] hrtimer: interrupt took 4032902 ns
	[  +3.948944] kauditd_printk_skb: 13 callbacks suppressed
	[ +10.083454] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 2 11:59] systemd-fstab-generator[3181]: Ignoring "noauto" for root device
	[  +0.803214] kauditd_printk_skb: 6 callbacks suppressed
	[Oct 2 12:00] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [9171c6defa67e303dc9b30efbd23f4378fe4dd579bbcae014d7f44068b4eaab5] <==
	* 2023-10-02 11:59:29.601856 I | raft: 8f4fcab0df4f7c44 became follower at term 0
	2023-10-02 11:59:29.601877 I | raft: newRaft 8f4fcab0df4f7c44 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-10-02 11:59:29.601892 I | raft: 8f4fcab0df4f7c44 became follower at term 1
	2023-10-02 11:59:29.612024 W | auth: simple token is not cryptographically signed
	2023-10-02 11:59:29.617130 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-10-02 11:59:29.619007 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-02 11:59:29.619274 I | embed: listening for metrics on http://192.168.83.82:2381
	2023-10-02 11:59:29.619543 I | etcdserver: 8f4fcab0df4f7c44 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-02 11:59:29.620083 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-02 11:59:29.620346 I | etcdserver/membership: added member 8f4fcab0df4f7c44 [https://192.168.83.82:2380] to cluster cf7ed821fb17c7fa
	2023-10-02 11:59:30.302490 I | raft: 8f4fcab0df4f7c44 is starting a new election at term 1
	2023-10-02 11:59:30.302715 I | raft: 8f4fcab0df4f7c44 became candidate at term 2
	2023-10-02 11:59:30.302913 I | raft: 8f4fcab0df4f7c44 received MsgVoteResp from 8f4fcab0df4f7c44 at term 2
	2023-10-02 11:59:30.302952 I | raft: 8f4fcab0df4f7c44 became leader at term 2
	2023-10-02 11:59:30.303074 I | raft: raft.node: 8f4fcab0df4f7c44 elected leader 8f4fcab0df4f7c44 at term 2
	2023-10-02 11:59:30.303439 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-02 11:59:30.305023 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-02 11:59:30.305268 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-02 11:59:30.305915 I | etcdserver: published {Name:old-k8s-version-749860 ClientURLs:[https://192.168.83.82:2379]} to cluster cf7ed821fb17c7fa
	2023-10-02 11:59:30.306042 I | embed: ready to serve client requests
	2023-10-02 11:59:30.306278 I | embed: ready to serve client requests
	2023-10-02 11:59:30.307496 I | embed: serving client requests on 192.168.83.82:2379
	2023-10-02 11:59:30.309285 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-02 12:09:30.335675 I | mvcc: store.index: compact 663
	2023-10-02 12:09:30.338005 I | mvcc: finished scheduled compaction at 663 (took 1.703277ms)
	
	* 
	* ==> kernel <==
	*  12:10:03 up 16 min,  0 users,  load average: 0.26, 0.23, 0.25
	Linux old-k8s-version-749860 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [8394cafbfead7293254874e819f893bca2ea8aaaff9ae6292b622e35df660404] <==
	* I1002 12:02:58.215972       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1002 12:02:58.216078       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 12:02:58.216148       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:02:58.216159       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:04:34.783177       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1002 12:04:34.783252       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 12:04:34.783312       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:04:34.783319       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:05:34.783766       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1002 12:05:34.783945       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 12:05:34.784028       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:05:34.784038       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:07:34.784727       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1002 12:07:34.785393       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 12:07:34.785839       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:07:34.785993       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:09:34.786147       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1002 12:09:34.786764       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 12:09:34.786913       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:09:34.786995       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [4799bd0c57b132de5e00957eeb4c1380b4319d3c7cbd8a43e78aba49f7be28f8] <==
	* W1002 12:03:39.206233       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:03:57.093306       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:04:11.208483       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:04:27.345494       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:04:43.210709       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:04:57.597714       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:05:15.213447       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:05:27.849855       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:05:47.215524       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:05:58.101782       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:06:19.217760       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:06:28.353746       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:06:51.219890       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:06:58.605730       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:07:23.222413       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:07:28.857756       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:07:55.224387       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:07:59.109666       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:08:27.226866       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:08:29.361899       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:08:59.228835       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:08:59.613794       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1002 12:09:29.865813       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:09:31.231150       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:10:00.117937       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [92b70651bc8fdda674d4dfa238bc0d51561a6a7d56207488bdb638acda4bc855] <==
	* W1002 11:59:57.693438       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1002 11:59:57.718970       1 node.go:135] Successfully retrieved node IP: 192.168.83.82
	I1002 11:59:57.719042       1 server_others.go:149] Using iptables Proxier.
	I1002 11:59:57.731898       1 server.go:529] Version: v1.16.0
	I1002 11:59:57.732895       1 config.go:131] Starting endpoints config controller
	I1002 11:59:57.732953       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1002 11:59:57.732985       1 config.go:313] Starting service config controller
	I1002 11:59:57.733004       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1002 11:59:57.841095       1 shared_informer.go:204] Caches are synced for service config 
	I1002 11:59:57.841261       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [27d21512e1f3511763fa7ac50dfb88800acac5e24ee79277815e016946c18474] <==
	* I1002 11:59:33.803312       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1002 11:59:33.803755       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1002 11:59:33.853523       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 11:59:33.853743       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 11:59:33.853879       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 11:59:33.853954       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 11:59:33.854321       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 11:59:33.854377       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 11:59:33.854412       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 11:59:33.854446       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 11:59:33.854493       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 11:59:33.856067       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 11:59:33.856175       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 11:59:34.855834       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 11:59:34.857659       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 11:59:34.859093       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 11:59:34.860530       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 11:59:34.863260       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 11:59:34.864881       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 11:59:34.865077       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 11:59:34.866198       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 11:59:34.866760       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 11:59:34.867744       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 11:59:34.869050       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 11:59:54.779168       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 11:53:50 UTC, ends at Mon 2023-10-02 12:10:03 UTC. --
	Oct 02 12:05:36 old-k8s-version-749860 kubelet[3187]: E1002 12:05:36.279191    3187 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 02 12:05:36 old-k8s-version-749860 kubelet[3187]: E1002 12:05:36.279258    3187 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 02 12:05:36 old-k8s-version-749860 kubelet[3187]: E1002 12:05:36.279307    3187 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 02 12:05:36 old-k8s-version-749860 kubelet[3187]: E1002 12:05:36.279337    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Oct 02 12:05:49 old-k8s-version-749860 kubelet[3187]: E1002 12:05:49.267252    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:06:03 old-k8s-version-749860 kubelet[3187]: E1002 12:06:03.267215    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:06:16 old-k8s-version-749860 kubelet[3187]: E1002 12:06:16.268250    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:06:30 old-k8s-version-749860 kubelet[3187]: E1002 12:06:30.268229    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:06:42 old-k8s-version-749860 kubelet[3187]: E1002 12:06:42.267257    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:06:54 old-k8s-version-749860 kubelet[3187]: E1002 12:06:54.267798    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:07:07 old-k8s-version-749860 kubelet[3187]: E1002 12:07:07.267550    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:07:22 old-k8s-version-749860 kubelet[3187]: E1002 12:07:22.268031    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:07:37 old-k8s-version-749860 kubelet[3187]: E1002 12:07:37.267140    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:07:49 old-k8s-version-749860 kubelet[3187]: E1002 12:07:49.267674    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:08:01 old-k8s-version-749860 kubelet[3187]: E1002 12:08:01.267517    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:08:16 old-k8s-version-749860 kubelet[3187]: E1002 12:08:16.268714    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:08:28 old-k8s-version-749860 kubelet[3187]: E1002 12:08:28.270942    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:08:42 old-k8s-version-749860 kubelet[3187]: E1002 12:08:42.269074    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:08:57 old-k8s-version-749860 kubelet[3187]: E1002 12:08:57.267036    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:09:09 old-k8s-version-749860 kubelet[3187]: E1002 12:09:09.267406    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:09:23 old-k8s-version-749860 kubelet[3187]: E1002 12:09:23.267513    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:09:26 old-k8s-version-749860 kubelet[3187]: E1002 12:09:26.413860    3187 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Oct 02 12:09:35 old-k8s-version-749860 kubelet[3187]: E1002 12:09:35.267551    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:09:50 old-k8s-version-749860 kubelet[3187]: E1002 12:09:50.267270    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:10:01 old-k8s-version-749860 kubelet[3187]: E1002 12:10:01.267652    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [039738890c0c0c918dfb2589cc51cc129d9f7a885474027049d11260e016669d] <==
	* I1002 11:59:57.862445       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 11:59:57.888114       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 11:59:57.888234       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 11:59:57.901481       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 11:59:57.904920       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-749860_384a1e12-a88f-47f3-bac3-1cb79a4b9540!
	I1002 11:59:57.906017       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"67313b9b-3b30-4b05-a538-a4ddd4744015", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-749860_384a1e12-a88f-47f3-bac3-1cb79a4b9540 became leader
	I1002 11:59:58.006001       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-749860_384a1e12-a88f-47f3-bac3-1cb79a4b9540!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-749860 -n old-k8s-version-749860
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-749860 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-n7z95
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-749860 describe pod metrics-server-74d5856cc6-n7z95
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-749860 describe pod metrics-server-74d5856cc6-n7z95: exit status 1 (75.243756ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-n7z95" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-749860 describe pod metrics-server-74d5856cc6-n7z95: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.39s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (417.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-777999 -n default-k8s-diff-port-777999
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-10-02 12:15:33.099395027 +0000 UTC m=+5983.671081071
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-777999 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-777999 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.007µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-777999 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-777999 -n default-k8s-diff-port-777999
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-777999 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-777999 logs -n 25: (1.276203357s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:47 UTC |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-304121             | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC | 02 Oct 23 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-749860        | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC | 02 Oct 23 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-487027            | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC | 02 Oct 23 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-487027                                  | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-777999  | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC | 02 Oct 23 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC |                     |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-304121                  | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 12:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-749860             | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-487027                 | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-487027                                  | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-777999       | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:59 UTC |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 12:14 UTC | 02 Oct 23 12:14 UTC |
	| start   | -p newest-cni-929075 --memory=2200 --alsologtostderr   | newest-cni-929075            | jenkins | v1.31.2 | 02 Oct 23 12:14 UTC | 02 Oct 23 12:15 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 12:14 UTC | 02 Oct 23 12:14 UTC |
	| addons  | enable metrics-server -p newest-cni-929075             | newest-cni-929075            | jenkins | v1.31.2 | 02 Oct 23 12:15 UTC | 02 Oct 23 12:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-487027                                  | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 12:15 UTC | 02 Oct 23 12:15 UTC |
	| stop    | -p newest-cni-929075                                   | newest-cni-929075            | jenkins | v1.31.2 | 02 Oct 23 12:15 UTC | 02 Oct 23 12:15 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-929075                  | newest-cni-929075            | jenkins | v1.31.2 | 02 Oct 23 12:15 UTC | 02 Oct 23 12:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-929075 --memory=2200 --alsologtostderr   | newest-cni-929075            | jenkins | v1.31.2 | 02 Oct 23 12:15 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 12:15:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 12:15:17.784995  390828 out.go:296] Setting OutFile to fd 1 ...
	I1002 12:15:17.785091  390828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:15:17.785099  390828 out.go:309] Setting ErrFile to fd 2...
	I1002 12:15:17.785104  390828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:15:17.785312  390828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 12:15:17.785854  390828 out.go:303] Setting JSON to false
	I1002 12:15:17.786926  390828 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10664,"bootTime":1696238254,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 12:15:17.786982  390828 start.go:138] virtualization: kvm guest
	I1002 12:15:17.789213  390828 out.go:177] * [newest-cni-929075] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 12:15:17.790698  390828 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 12:15:17.792161  390828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 12:15:17.790741  390828 notify.go:220] Checking for updates...
	I1002 12:15:17.794902  390828 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 12:15:17.796419  390828 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 12:15:17.797891  390828 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 12:15:17.799225  390828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 12:15:17.801019  390828 config.go:182] Loaded profile config "newest-cni-929075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:15:17.801449  390828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:15:17.801506  390828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:15:17.816068  390828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45523
	I1002 12:15:17.816467  390828 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:15:17.817045  390828 main.go:141] libmachine: Using API Version  1
	I1002 12:15:17.817068  390828 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:15:17.817390  390828 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:15:17.817641  390828 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:15:17.817849  390828 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 12:15:17.818182  390828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:15:17.818224  390828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:15:17.832547  390828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40395
	I1002 12:15:17.832906  390828 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:15:17.833356  390828 main.go:141] libmachine: Using API Version  1
	I1002 12:15:17.833379  390828 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:15:17.833676  390828 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:15:17.833867  390828 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:15:17.870157  390828 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 12:15:17.871694  390828 start.go:298] selected driver: kvm2
	I1002 12:15:17.871713  390828 start.go:902] validating driver "kvm2" against &{Name:newest-cni-929075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-929075 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.146 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:15:17.871813  390828 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 12:15:17.872506  390828 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:15:17.872594  390828 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 12:15:17.887514  390828 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 12:15:17.887862  390828 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 12:15:17.887896  390828 cni.go:84] Creating CNI manager for ""
	I1002 12:15:17.887907  390828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 12:15:17.887926  390828 start_flags.go:321] config:
	{Name:newest-cni-929075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-929075 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.146 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:15:17.888071  390828 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:15:17.890132  390828 out.go:177] * Starting control plane node newest-cni-929075 in cluster newest-cni-929075
	I1002 12:15:17.891801  390828 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 12:15:17.891846  390828 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1002 12:15:17.891854  390828 cache.go:57] Caching tarball of preloaded images
	I1002 12:15:17.891939  390828 preload.go:174] Found /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 12:15:17.891950  390828 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 12:15:17.892056  390828 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/config.json ...
	I1002 12:15:17.892226  390828 start.go:365] acquiring machines lock for newest-cni-929075: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 12:15:17.892272  390828 start.go:369] acquired machines lock for "newest-cni-929075" in 27.472µs
	I1002 12:15:17.892286  390828 start.go:96] Skipping create...Using existing machine configuration
	I1002 12:15:17.892294  390828 fix.go:54] fixHost starting: 
	I1002 12:15:17.892591  390828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:15:17.892628  390828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:15:17.910409  390828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39161
	I1002 12:15:17.910846  390828 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:15:17.911423  390828 main.go:141] libmachine: Using API Version  1
	I1002 12:15:17.911469  390828 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:15:17.911808  390828 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:15:17.912009  390828 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:15:17.912186  390828 main.go:141] libmachine: (newest-cni-929075) Calling .GetState
	I1002 12:15:17.913922  390828 fix.go:102] recreateIfNeeded on newest-cni-929075: state=Stopped err=<nil>
	I1002 12:15:17.913962  390828 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	W1002 12:15:17.914141  390828 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 12:15:17.916218  390828 out.go:177] * Restarting existing kvm2 VM for "newest-cni-929075" ...
	I1002 12:15:17.917676  390828 main.go:141] libmachine: (newest-cni-929075) Calling .Start
	I1002 12:15:17.917851  390828 main.go:141] libmachine: (newest-cni-929075) Ensuring networks are active...
	I1002 12:15:17.918621  390828 main.go:141] libmachine: (newest-cni-929075) Ensuring network default is active
	I1002 12:15:17.918950  390828 main.go:141] libmachine: (newest-cni-929075) Ensuring network mk-newest-cni-929075 is active
	I1002 12:15:17.919277  390828 main.go:141] libmachine: (newest-cni-929075) Getting domain xml...
	I1002 12:15:17.920060  390828 main.go:141] libmachine: (newest-cni-929075) Creating domain...
	I1002 12:15:19.175918  390828 main.go:141] libmachine: (newest-cni-929075) Waiting to get IP...
	I1002 12:15:19.176767  390828 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:15:19.177155  390828 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:15:19.177237  390828 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:15:19.177123  390863 retry.go:31] will retry after 229.882553ms: waiting for machine to come up
	I1002 12:15:19.408724  390828 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:15:19.409323  390828 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:15:19.409355  390828 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:15:19.409291  390863 retry.go:31] will retry after 298.583203ms: waiting for machine to come up
	I1002 12:15:19.709901  390828 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:15:19.710427  390828 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:15:19.710462  390828 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:15:19.710382  390863 retry.go:31] will retry after 381.690285ms: waiting for machine to come up
	I1002 12:15:20.093987  390828 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:15:20.094558  390828 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:15:20.094594  390828 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:15:20.094500  390863 retry.go:31] will retry after 562.253634ms: waiting for machine to come up
	I1002 12:15:20.657894  390828 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:15:20.658340  390828 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:15:20.658386  390828 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:15:20.658284  390863 retry.go:31] will retry after 690.926968ms: waiting for machine to come up
	I1002 12:15:21.351465  390828 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:15:21.351957  390828 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:15:21.352001  390828 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:15:21.351897  390863 retry.go:31] will retry after 941.759797ms: waiting for machine to come up
	I1002 12:15:22.295597  390828 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:15:22.296075  390828 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:15:22.296106  390828 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:15:22.296036  390863 retry.go:31] will retry after 866.440397ms: waiting for machine to come up
	I1002 12:15:23.165004  390828 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:15:23.165460  390828 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:15:23.165492  390828 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:15:23.165425  390863 retry.go:31] will retry after 1.083425926s: waiting for machine to come up
	I1002 12:15:24.250655  390828 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:15:24.251147  390828 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:15:24.251183  390828 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:15:24.251089  390863 retry.go:31] will retry after 1.214752963s: waiting for machine to come up
	I1002 12:15:25.467532  390828 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:15:25.467917  390828 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:15:25.467943  390828 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:15:25.467870  390863 retry.go:31] will retry after 1.454775605s: waiting for machine to come up
	I1002 12:15:26.924703  390828 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:15:26.925239  390828 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:15:26.925267  390828 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:15:26.925178  390863 retry.go:31] will retry after 2.669281542s: waiting for machine to come up
	I1002 12:15:29.597102  390828 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:15:29.597668  390828 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:15:29.597701  390828 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:15:29.597606  390863 retry.go:31] will retry after 2.921142451s: waiting for machine to come up
	I1002 12:15:32.522870  390828 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:15:32.523316  390828 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:15:32.523349  390828 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:15:32.523241  390863 retry.go:31] will retry after 3.077250349s: waiting for machine to come up
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-02 11:54:31 UTC, ends at Mon 2023-10-02 12:15:33 UTC. --
	Oct 02 12:15:33 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:15:33.755956013Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248933755935184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3d49427c-98ae-4c74-a57c-7547f4ca42b5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:15:33 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:15:33.756661484Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=630e648b-4a59-40f3-ac4a-9487be33dcab name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:15:33 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:15:33.756744428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=630e648b-4a59-40f3-ac4a-9487be33dcab name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:15:33 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:15:33.756989325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175,PodSandboxId:75e776743a341eabaacb9b9dd17a0623dc18a78bd8dd09443cba6b70274f410b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247739102202412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aff1275b-909d-4c70-9fb5-cb36170c591e,},Annotations:map[string]string{io.kubernetes.container.hash: da3ea7ea,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5279fe29e84fdd82d6b51df85dae9eee1dbcebf796c57ae25a534c2fd0917e20,PodSandboxId:30b85178c495cbd0c0b024ab2e5376342b1163d573e7547ded3790428a86401a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1696247718731002733,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e7f8435-3c92-447f-ad2c-c3e7da52e094,},Annotations:map[string]string{io.kubernetes.container.hash: ca9ef9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d,PodSandboxId:5c05ef8d8ce5e67397f24f906378abaf1f6e1c89026fae43140d17e167998470,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696247715793524794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9wv56,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f04d6125-ea28-41cc-9251-7ccee27162bc,},Annotations:map[string]string{io.kubernetes.container.hash: ba540247,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358,PodSandboxId:75e776743a341eabaacb9b9dd17a0623dc18a78bd8dd09443cba6b70274f410b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696247707982939452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: aff1275b-909d-4c70-9fb5-cb36170c591e,},Annotations:map[string]string{io.kubernetes.container.hash: da3ea7ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6,PodSandboxId:a4855a476b0c0b89510140f6b9ddc93bd6fb12ae14434a0c224de68939ad5ae0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696247707996357745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gchnc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
61811c7-2ac8-448a-b441-838f9aaf9145,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac3a542,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d,PodSandboxId:f8358a2d762d9c82f567b63310a724a21f33c3b7b555251edf79a3a3c1fbf920,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696247700773032427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e66c9b627bcf9a6af934f21fc5eb0505,},An
notations:map[string]string{io.kubernetes.container.hash: ca6b94bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e,PodSandboxId:48c194167752c2af879969befa6fefc77bc9effbc59909f196e991842ce6396c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696247700475133987,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7670623f64278461b660148b22f51806,},An
notations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735,PodSandboxId:a66bf166b0a00e30f0bed46517ba0818e740673dc653ad0af823a7204fc0675e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696247700040020331,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c3d20afce7e1c07e31633c2522947a,},An
notations:map[string]string{io.kubernetes.container.hash: c90fcb7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f,PodSandboxId:fd604b3ba21697a91242f479bab00b84059e167a31a9e44f747b982d01824ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696247700012459103,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
0d4795119f3d4d980acb130288fbaca,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=630e648b-4a59-40f3-ac4a-9487be33dcab name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:15:33 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:15:33.809876753Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5d97bfcb-91d1-4702-9cf4-9c44315d5f48 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:15:33 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:15:33.809992247Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5d97bfcb-91d1-4702-9cf4-9c44315d5f48 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:15:33 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:15:33.812233081Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=901ab160-89d5-434e-bc55-e4f0e9a20b25 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:15:33 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:15:33.813073574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248933813050974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=901ab160-89d5-434e-bc55-e4f0e9a20b25 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:15:33 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:15:33.814150759Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f5205bc0-cd7f-4590-87d0-b531bde5355b name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:15:33 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:15:33.814248936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f5205bc0-cd7f-4590-87d0-b531bde5355b name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:15:33 default-k8s-diff-port-777999 crio[725]: time="2023-10-02 12:15:33.924748848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175,PodSandboxId:75e776743a341eabaacb9b9dd17a0623dc18a78bd8dd09443cba6b70274f410b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247739102202412,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aff1275b-909d-4c70-9fb5-cb36170c591e,},Annotations:map[string]string{io.kubernetes.container.hash: da3ea7ea,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5279fe29e84fdd82d6b51df85dae9eee1dbcebf796c57ae25a534c2fd0917e20,PodSandboxId:30b85178c495cbd0c0b024ab2e5376342b1163d573e7547ded3790428a86401a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1696247718731002733,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e7f8435-3c92-447f-ad2c-c3e7da52e094,},Annotations:map[string]string{io.kubernetes.container.hash: ca9ef9ea,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d,PodSandboxId:5c05ef8d8ce5e67397f24f906378abaf1f6e1c89026fae43140d17e167998470,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696247715793524794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9wv56,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f04d6125-ea28-41cc-9251-7ccee27162bc,},Annotations:map[string]string{io.kubernetes.container.hash: ba540247,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358,PodSandboxId:75e776743a341eabaacb9b9dd17a0623dc18a78bd8dd09443cba6b70274f410b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696247707982939452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: aff1275b-909d-4c70-9fb5-cb36170c591e,},Annotations:map[string]string{io.kubernetes.container.hash: da3ea7ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6,PodSandboxId:a4855a476b0c0b89510140f6b9ddc93bd6fb12ae14434a0c224de68939ad5ae0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696247707996357745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gchnc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
61811c7-2ac8-448a-b441-838f9aaf9145,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac3a542,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d,PodSandboxId:f8358a2d762d9c82f567b63310a724a21f33c3b7b555251edf79a3a3c1fbf920,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696247700773032427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e66c9b627bcf9a6af934f21fc5eb0505,},An
notations:map[string]string{io.kubernetes.container.hash: ca6b94bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e,PodSandboxId:48c194167752c2af879969befa6fefc77bc9effbc59909f196e991842ce6396c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696247700475133987,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7670623f64278461b660148b22f51806,},An
notations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735,PodSandboxId:a66bf166b0a00e30f0bed46517ba0818e740673dc653ad0af823a7204fc0675e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696247700040020331,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c3d20afce7e1c07e31633c2522947a,},An
notations:map[string]string{io.kubernetes.container.hash: c90fcb7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f,PodSandboxId:fd604b3ba21697a91242f479bab00b84059e167a31a9e44f747b982d01824ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696247700012459103,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-777999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
0d4795119f3d4d980acb130288fbaca,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ade5d736-ce47-4ad8-9ef8-4618415348f2 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2d3596d8e4114       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       3                   75e776743a341       storage-provisioner
	5279fe29e84fd       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   30b85178c495c       busybox
	f4357b618abec       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      20 minutes ago      Running             coredns                   1                   5c05ef8d8ce5e       coredns-5dd5756b68-9wv56
	d858d8eba37bc       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                      20 minutes ago      Running             kube-proxy                1                   a4855a476b0c0       kube-proxy-gchnc
	b5dd54a6498cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       2                   75e776743a341       storage-provisioner
	8b9af145fa743       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      20 minutes ago      Running             etcd                      1                   f8358a2d762d9       etcd-default-k8s-diff-port-777999
	7a5a17cf18027       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                      20 minutes ago      Running             kube-scheduler            1                   48c194167752c       kube-scheduler-default-k8s-diff-port-777999
	3d34e284efffd       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                      20 minutes ago      Running             kube-apiserver            1                   a66bf166b0a00       kube-apiserver-default-k8s-diff-port-777999
	beb885cf3eedd       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                      20 minutes ago      Running             kube-controller-manager   1                   fd604b3ba2169       kube-controller-manager-default-k8s-diff-port-777999
	
	* 
	* ==> coredns [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39911 - 12955 "HINFO IN 5381547072923470623.3344521106857374535. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.062853859s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-777999
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-777999
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=default-k8s-diff-port-777999
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T11_46_34_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 11:46:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-777999
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 12:15:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 12:10:54 +0000   Mon, 02 Oct 2023 11:46:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 12:10:54 +0000   Mon, 02 Oct 2023 11:46:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 12:10:54 +0000   Mon, 02 Oct 2023 11:46:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 12:10:54 +0000   Mon, 02 Oct 2023 11:55:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.251
	  Hostname:    default-k8s-diff-port-777999
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 8779df539c584f4fa2a1664ce1ea848f
	  System UUID:                8779df53-9c58-4f4f-a2a1-664ce1ea848f
	  Boot ID:                    8e86307d-4f39-4d78-b17c-0c82039497a9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-9wv56                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-777999                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-777999             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-777999    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-gchnc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-777999             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-wk2c7                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node default-k8s-diff-port-777999 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-777999 event: Registered Node default-k8s-diff-port-777999 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-777999 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-777999 event: Registered Node default-k8s-diff-port-777999 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct 2 11:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.075730] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.571556] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.387820] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152188] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.638291] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.009684] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.122997] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.179985] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.135871] systemd-fstab-generator[685]: Ignoring "noauto" for root device
	[  +0.278204] systemd-fstab-generator[709]: Ignoring "noauto" for root device
	[ +17.876847] systemd-fstab-generator[923]: Ignoring "noauto" for root device
	[Oct 2 11:55] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d] <==
	* {"level":"warn","ts":"2023-10-02T11:55:09.045526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.38932ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:3712"}
	{"level":"info","ts":"2023-10-02T11:55:09.045636Z","caller":"traceutil/trace.go:171","msg":"trace[136419149] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:576; }","duration":"109.422053ms","start":"2023-10-02T11:55:08.936123Z","end":"2023-10-02T11:55:09.045545Z","steps":["trace[136419149] 'agreement among raft nodes before linearized reading'  (duration: 109.353073ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T11:55:09.476883Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"425.15892ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-wk2c7.178a484d60a9b788\" ","response":"range_response_count:1 size:984"}
	{"level":"info","ts":"2023-10-02T11:55:09.477094Z","caller":"traceutil/trace.go:171","msg":"trace[2113176847] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-57f55c9bc5-wk2c7.178a484d60a9b788; range_end:; response_count:1; response_revision:576; }","duration":"425.378628ms","start":"2023-10-02T11:55:09.051696Z","end":"2023-10-02T11:55:09.477074Z","steps":["trace[2113176847] 'range keys from in-memory index tree'  (duration: 425.081072ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T11:55:09.477175Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-02T11:55:09.051682Z","time spent":"425.473682ms","remote":"127.0.0.1:55534","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":1007,"request content":"key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-wk2c7.178a484d60a9b788\" "}
	{"level":"info","ts":"2023-10-02T11:55:09.477881Z","caller":"traceutil/trace.go:171","msg":"trace[490392895] linearizableReadLoop","detail":"{readStateIndex:619; appliedIndex:618; }","duration":"405.77717ms","start":"2023-10-02T11:55:09.072093Z","end":"2023-10-02T11:55:09.47787Z","steps":["trace[490392895] 'read index received'  (duration: 405.651106ms)","trace[490392895] 'applied index is now lower than readState.Index'  (duration: 125.499µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-02T11:55:09.478044Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"405.985597ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-777999\" ","response":"range_response_count:1 size:4346"}
	{"level":"info","ts":"2023-10-02T11:55:09.478198Z","caller":"traceutil/trace.go:171","msg":"trace[883365813] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-777999; range_end:; response_count:1; response_revision:577; }","duration":"406.143763ms","start":"2023-10-02T11:55:09.072043Z","end":"2023-10-02T11:55:09.478187Z","steps":["trace[883365813] 'agreement among raft nodes before linearized reading'  (duration: 405.907465ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T11:55:09.478262Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-02T11:55:09.072029Z","time spent":"406.218449ms","remote":"127.0.0.1:55558","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":4369,"request content":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-777999\" "}
	{"level":"info","ts":"2023-10-02T11:55:09.47849Z","caller":"traceutil/trace.go:171","msg":"trace[1710003894] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"426.069524ms","start":"2023-10-02T11:55:09.052411Z","end":"2023-10-02T11:55:09.478481Z","steps":["trace[1710003894] 'process raft request'  (duration: 425.37676ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T11:55:09.479211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"427.269468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-10-02T11:55:09.479312Z","caller":"traceutil/trace.go:171","msg":"trace[975818954] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:576; }","duration":"427.411188ms","start":"2023-10-02T11:55:09.05189Z","end":"2023-10-02T11:55:09.479301Z","steps":["trace[975818954] 'range keys from in-memory index tree'  (duration: 424.808247ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T11:55:09.47935Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-02T11:55:09.051885Z","time spent":"427.451698ms","remote":"127.0.0.1:55562","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":230,"request content":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" "}
	{"level":"warn","ts":"2023-10-02T11:55:09.479089Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-02T11:55:09.052388Z","time spent":"426.16085ms","remote":"127.0.0.1:55558","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3558,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:552 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3504 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"info","ts":"2023-10-02T11:55:10.630101Z","caller":"traceutil/trace.go:171","msg":"trace[584673202] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"149.659967ms","start":"2023-10-02T11:55:10.480417Z","end":"2023-10-02T11:55:10.630077Z","steps":["trace[584673202] 'process raft request'  (duration: 149.429974ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-02T11:55:19.68063Z","caller":"traceutil/trace.go:171","msg":"trace[1514956505] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"103.629996ms","start":"2023-10-02T11:55:19.576917Z","end":"2023-10-02T11:55:19.680547Z","steps":["trace[1514956505] 'process raft request'  (duration: 100.565461ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-02T12:05:04.558744Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":862}
	{"level":"info","ts":"2023-10-02T12:05:04.561665Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":862,"took":"2.615615ms","hash":2110775100}
	{"level":"info","ts":"2023-10-02T12:05:04.561745Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2110775100,"revision":862,"compact-revision":-1}
	{"level":"info","ts":"2023-10-02T12:10:04.567234Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1104}
	{"level":"info","ts":"2023-10-02T12:10:04.569913Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1104,"took":"1.848187ms","hash":542326494}
	{"level":"info","ts":"2023-10-02T12:10:04.57027Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":542326494,"revision":1104,"compact-revision":862}
	{"level":"info","ts":"2023-10-02T12:15:04.579803Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1348}
	{"level":"info","ts":"2023-10-02T12:15:04.581726Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1348,"took":"1.371333ms","hash":2265998929}
	{"level":"info","ts":"2023-10-02T12:15:04.581805Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2265998929,"revision":1348,"compact-revision":1104}
	
	* 
	* ==> kernel <==
	*  12:15:34 up 21 min,  0 users,  load average: 0.33, 0.17, 0.11
	Linux default-k8s-diff-port-777999 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735] <==
	* E1002 12:11:07.266368       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:11:07.266403       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:12:06.100253       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1002 12:13:06.100201       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1002 12:13:07.265874       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:13:07.265938       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1002 12:13:07.265957       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 12:13:07.267051       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:13:07.267193       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:13:07.267225       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:14:06.100353       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1002 12:15:06.100351       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1002 12:15:06.270149       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:15:06.270299       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:15:06.271048       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1002 12:15:07.270685       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:15:07.270793       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:15:07.270803       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 12:15:07.270900       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:15:07.270981       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1002 12:15:07.272004       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f] <==
	* I1002 12:09:50.470334       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:10:19.944519       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:10:20.480270       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:10:49.951206       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:10:50.492317       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 12:11:11.894361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="362.325µs"
	E1002 12:11:19.959076       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:11:20.501903       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 12:11:26.895365       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="144.725µs"
	E1002 12:11:49.964550       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:11:50.510929       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:12:19.970371       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:12:20.521326       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:12:49.976212       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:12:50.530083       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:13:19.982425       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:13:20.540046       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:13:49.989268       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:13:50.549716       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:14:19.996369       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:14:20.559365       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:14:50.002992       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:14:50.568653       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:15:20.009131       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:15:20.578497       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6] <==
	* I1002 11:55:08.524683       1 server_others.go:69] "Using iptables proxy"
	I1002 11:55:08.547494       1 node.go:141] Successfully retrieved node IP: 192.168.61.251
	I1002 11:55:08.615800       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1002 11:55:08.615898       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 11:55:08.619258       1 server_others.go:152] "Using iptables Proxier"
	I1002 11:55:08.619355       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 11:55:08.619640       1 server.go:846] "Version info" version="v1.28.2"
	I1002 11:55:08.619745       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 11:55:08.620750       1 config.go:188] "Starting service config controller"
	I1002 11:55:08.620830       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 11:55:08.620877       1 config.go:97] "Starting endpoint slice config controller"
	I1002 11:55:08.620904       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 11:55:08.623856       1 config.go:315] "Starting node config controller"
	I1002 11:55:08.623982       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 11:55:08.721793       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 11:55:08.729486       1 shared_informer.go:318] Caches are synced for service config
	I1002 11:55:08.729520       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e] <==
	* I1002 11:55:03.240993       1 serving.go:348] Generated self-signed cert in-memory
	W1002 11:55:06.188368       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 11:55:06.188433       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 11:55:06.188449       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 11:55:06.188466       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 11:55:06.269359       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1002 11:55:06.269469       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 11:55:06.279840       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 11:55:06.279956       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 11:55:06.279904       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 11:55:06.282479       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 11:55:06.382814       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 11:54:31 UTC, ends at Mon 2023-10-02 12:15:34 UTC. --
	Oct 02 12:12:58 default-k8s-diff-port-777999 kubelet[929]: E1002 12:12:58.898399     929 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:12:58 default-k8s-diff-port-777999 kubelet[929]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:12:58 default-k8s-diff-port-777999 kubelet[929]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:12:58 default-k8s-diff-port-777999 kubelet[929]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:13:09 default-k8s-diff-port-777999 kubelet[929]: E1002 12:13:09.875784     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:13:21 default-k8s-diff-port-777999 kubelet[929]: E1002 12:13:21.876063     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:13:36 default-k8s-diff-port-777999 kubelet[929]: E1002 12:13:36.875839     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:13:51 default-k8s-diff-port-777999 kubelet[929]: E1002 12:13:51.876166     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:13:58 default-k8s-diff-port-777999 kubelet[929]: E1002 12:13:58.901393     929 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:13:58 default-k8s-diff-port-777999 kubelet[929]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:13:58 default-k8s-diff-port-777999 kubelet[929]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:13:58 default-k8s-diff-port-777999 kubelet[929]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:14:04 default-k8s-diff-port-777999 kubelet[929]: E1002 12:14:04.876100     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:14:15 default-k8s-diff-port-777999 kubelet[929]: E1002 12:14:15.876313     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:14:26 default-k8s-diff-port-777999 kubelet[929]: E1002 12:14:26.876440     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:14:38 default-k8s-diff-port-777999 kubelet[929]: E1002 12:14:38.879940     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:14:51 default-k8s-diff-port-777999 kubelet[929]: E1002 12:14:51.875732     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:14:58 default-k8s-diff-port-777999 kubelet[929]: E1002 12:14:58.900672     929 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:14:58 default-k8s-diff-port-777999 kubelet[929]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:14:58 default-k8s-diff-port-777999 kubelet[929]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:14:58 default-k8s-diff-port-777999 kubelet[929]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:14:58 default-k8s-diff-port-777999 kubelet[929]: E1002 12:14:58.909929     929 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Oct 02 12:15:04 default-k8s-diff-port-777999 kubelet[929]: E1002 12:15:04.879709     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:15:19 default-k8s-diff-port-777999 kubelet[929]: E1002 12:15:19.875247     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	Oct 02 12:15:30 default-k8s-diff-port-777999 kubelet[929]: E1002 12:15:30.877145     929 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wk2c7" podUID="f28e9db7-2182-40d8-85a7-fa40c2ff8077"
	
	* 
	* ==> storage-provisioner [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175] <==
	* I1002 11:55:39.232409       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 11:55:39.244170       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 11:55:39.244296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 11:55:56.658412       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 11:55:56.659382       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-777999_0815ca29-5dd5-4b61-9673-bb7301a61900!
	I1002 11:55:56.658799       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7409c62f-4559-4b58-9abe-58b34486fa7c", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-777999_0815ca29-5dd5-4b61-9673-bb7301a61900 became leader
	I1002 11:55:56.759621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-777999_0815ca29-5dd5-4b61-9673-bb7301a61900!
	
	* 
	* ==> storage-provisioner [b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358] <==
	* I1002 11:55:08.425436       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 11:55:38.446943       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-777999 -n default-k8s-diff-port-777999
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-777999 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-wk2c7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-777999 describe pod metrics-server-57f55c9bc5-wk2c7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-777999 describe pod metrics-server-57f55c9bc5-wk2c7: exit status 1 (62.543922ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-wk2c7" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-777999 describe pod metrics-server-57f55c9bc5-wk2c7: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (417.36s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (381.68s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1002 12:09:04.535117  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 12:09:14.659930  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 12:09:15.317458  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 12:09:26.888798  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 12:09:34.099599  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-487027 -n embed-certs-487027
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-10-02 12:15:06.646993926 +0000 UTC m=+5957.218679968
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-487027 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-487027 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.267µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-487027 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-487027 -n embed-certs-487027
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-487027 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-487027 logs -n 25: (1.337988557s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo find                             | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo crio                             | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-124285                                       | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-448198 | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | disable-driver-mounts-448198                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:47 UTC |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-304121             | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC | 02 Oct 23 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-749860        | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC | 02 Oct 23 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-487027            | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC | 02 Oct 23 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-487027                                  | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-777999  | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC | 02 Oct 23 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC |                     |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-304121                  | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 12:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-749860             | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-487027                 | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-487027                                  | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-777999       | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:59 UTC |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 12:14 UTC | 02 Oct 23 12:14 UTC |
	| start   | -p newest-cni-929075 --memory=2200 --alsologtostderr   | newest-cni-929075            | jenkins | v1.31.2 | 02 Oct 23 12:14 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 12:14 UTC | 02 Oct 23 12:14 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 12:14:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 12:14:07.439143  389933 out.go:296] Setting OutFile to fd 1 ...
	I1002 12:14:07.439473  389933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:14:07.439483  389933 out.go:309] Setting ErrFile to fd 2...
	I1002 12:14:07.439488  389933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:14:07.439684  389933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 12:14:07.440279  389933 out.go:303] Setting JSON to false
	I1002 12:14:07.441360  389933 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10594,"bootTime":1696238254,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 12:14:07.441422  389933 start.go:138] virtualization: kvm guest
	I1002 12:14:07.444787  389933 out.go:177] * [newest-cni-929075] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 12:14:07.446411  389933 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 12:14:07.446476  389933 notify.go:220] Checking for updates...
	I1002 12:14:07.449580  389933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 12:14:07.450911  389933 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 12:14:07.452194  389933 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 12:14:07.453414  389933 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 12:14:07.454543  389933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 12:14:07.456240  389933 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:14:07.456362  389933 config.go:182] Loaded profile config "embed-certs-487027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:14:07.456466  389933 config.go:182] Loaded profile config "no-preload-304121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:14:07.456615  389933 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 12:14:07.493860  389933 out.go:177] * Using the kvm2 driver based on user configuration
	I1002 12:14:07.495173  389933 start.go:298] selected driver: kvm2
	I1002 12:14:07.495188  389933 start.go:902] validating driver "kvm2" against <nil>
	I1002 12:14:07.495200  389933 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 12:14:07.495927  389933 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:14:07.496008  389933 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 12:14:07.511711  389933 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 12:14:07.511760  389933 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W1002 12:14:07.511816  389933 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1002 12:14:07.512018  389933 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 12:14:07.512054  389933 cni.go:84] Creating CNI manager for ""
	I1002 12:14:07.512064  389933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 12:14:07.512071  389933 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 12:14:07.512081  389933 start_flags.go:321] config:
	{Name:newest-cni-929075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-929075 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:14:07.512229  389933 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:14:07.514590  389933 out.go:177] * Starting control plane node newest-cni-929075 in cluster newest-cni-929075
	I1002 12:14:07.516088  389933 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 12:14:07.516136  389933 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1002 12:14:07.516144  389933 cache.go:57] Caching tarball of preloaded images
	I1002 12:14:07.516233  389933 preload.go:174] Found /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 12:14:07.516243  389933 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 12:14:07.516334  389933 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/config.json ...
	I1002 12:14:07.516351  389933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/config.json: {Name:mk63314271bc9ebe46627fccddb5cde06b2b76f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:14:07.516509  389933 start.go:365] acquiring machines lock for newest-cni-929075: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 12:14:07.516538  389933 start.go:369] acquired machines lock for "newest-cni-929075" in 15.906µs
	I1002 12:14:07.516554  389933 start.go:93] Provisioning new machine with config: &{Name:newest-cni-929075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-929075 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 12:14:07.516620  389933 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 12:14:07.518428  389933 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 12:14:07.518569  389933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:14:07.518614  389933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:14:07.532909  389933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38871
	I1002 12:14:07.533375  389933 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:14:07.533907  389933 main.go:141] libmachine: Using API Version  1
	I1002 12:14:07.533935  389933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:14:07.534400  389933 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:14:07.534626  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetMachineName
	I1002 12:14:07.534790  389933 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:14:07.534948  389933 start.go:159] libmachine.API.Create for "newest-cni-929075" (driver="kvm2")
	I1002 12:14:07.534983  389933 client.go:168] LocalClient.Create starting
	I1002 12:14:07.535028  389933 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem
	I1002 12:14:07.535070  389933 main.go:141] libmachine: Decoding PEM data...
	I1002 12:14:07.535094  389933 main.go:141] libmachine: Parsing certificate...
	I1002 12:14:07.535164  389933 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem
	I1002 12:14:07.535192  389933 main.go:141] libmachine: Decoding PEM data...
	I1002 12:14:07.535211  389933 main.go:141] libmachine: Parsing certificate...
	I1002 12:14:07.535234  389933 main.go:141] libmachine: Running pre-create checks...
	I1002 12:14:07.535248  389933 main.go:141] libmachine: (newest-cni-929075) Calling .PreCreateCheck
	I1002 12:14:07.535621  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetConfigRaw
	I1002 12:14:07.536056  389933 main.go:141] libmachine: Creating machine...
	I1002 12:14:07.536077  389933 main.go:141] libmachine: (newest-cni-929075) Calling .Create
	I1002 12:14:07.536231  389933 main.go:141] libmachine: (newest-cni-929075) Creating KVM machine...
	I1002 12:14:07.537645  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found existing default KVM network
	I1002 12:14:07.539320  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:07.539156  389957 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b7:42:a7} reservation:<nil>}
	I1002 12:14:07.540460  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:07.540376  389957 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:22:03:3f} reservation:<nil>}
	I1002 12:14:07.541295  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:07.541188  389957 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:6a:22:23} reservation:<nil>}
	I1002 12:14:07.542522  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:07.542440  389957 network.go:214] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:69:64:a9} reservation:<nil>}
	I1002 12:14:07.545038  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:07.544938  389957 network.go:209] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00048a610}
	I1002 12:14:07.553863  389933 main.go:141] libmachine: (newest-cni-929075) DBG | trying to create private KVM network mk-newest-cni-929075 192.168.83.0/24...
	I1002 12:14:07.634907  389933 main.go:141] libmachine: (newest-cni-929075) DBG | private KVM network mk-newest-cni-929075 192.168.83.0/24 created
	I1002 12:14:07.634951  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:07.634866  389957 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 12:14:07.634969  389933 main.go:141] libmachine: (newest-cni-929075) Setting up store path in /home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075 ...
	I1002 12:14:07.634988  389933 main.go:141] libmachine: (newest-cni-929075) Building disk image from file:///home/jenkins/minikube-integration/17340-332611/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1002 12:14:07.635149  389933 main.go:141] libmachine: (newest-cni-929075) Downloading /home/jenkins/minikube-integration/17340-332611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17340-332611/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1002 12:14:07.883096  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:07.882924  389957 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/id_rsa...
	I1002 12:14:08.199859  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:08.199732  389957 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/newest-cni-929075.rawdisk...
	I1002 12:14:08.199896  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Writing magic tar header
	I1002 12:14:08.199913  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Writing SSH key tar header
	I1002 12:14:08.199927  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:08.199875  389957 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075 ...
	I1002 12:14:08.200086  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075
	I1002 12:14:08.200108  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube/machines
	I1002 12:14:08.200132  389933 main.go:141] libmachine: (newest-cni-929075) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075 (perms=drwx------)
	I1002 12:14:08.200153  389933 main.go:141] libmachine: (newest-cni-929075) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube/machines (perms=drwxr-xr-x)
	I1002 12:14:08.200175  389933 main.go:141] libmachine: (newest-cni-929075) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube (perms=drwxr-xr-x)
	I1002 12:14:08.200193  389933 main.go:141] libmachine: (newest-cni-929075) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611 (perms=drwxrwxr-x)
	I1002 12:14:08.200217  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 12:14:08.200233  389933 main.go:141] libmachine: (newest-cni-929075) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 12:14:08.200251  389933 main.go:141] libmachine: (newest-cni-929075) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 12:14:08.200265  389933 main.go:141] libmachine: (newest-cni-929075) Creating domain...
	I1002 12:14:08.200299  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611
	I1002 12:14:08.200331  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1002 12:14:08.200368  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Checking permissions on dir: /home/jenkins
	I1002 12:14:08.200386  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Checking permissions on dir: /home
	I1002 12:14:08.200397  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Skipping /home - not owner
	I1002 12:14:08.201613  389933 main.go:141] libmachine: (newest-cni-929075) define libvirt domain using xml: 
	I1002 12:14:08.201636  389933 main.go:141] libmachine: (newest-cni-929075) <domain type='kvm'>
	I1002 12:14:08.201648  389933 main.go:141] libmachine: (newest-cni-929075)   <name>newest-cni-929075</name>
	I1002 12:14:08.201666  389933 main.go:141] libmachine: (newest-cni-929075)   <memory unit='MiB'>2200</memory>
	I1002 12:14:08.201681  389933 main.go:141] libmachine: (newest-cni-929075)   <vcpu>2</vcpu>
	I1002 12:14:08.201690  389933 main.go:141] libmachine: (newest-cni-929075)   <features>
	I1002 12:14:08.201704  389933 main.go:141] libmachine: (newest-cni-929075)     <acpi/>
	I1002 12:14:08.201717  389933 main.go:141] libmachine: (newest-cni-929075)     <apic/>
	I1002 12:14:08.201731  389933 main.go:141] libmachine: (newest-cni-929075)     <pae/>
	I1002 12:14:08.201743  389933 main.go:141] libmachine: (newest-cni-929075)     
	I1002 12:14:08.201758  389933 main.go:141] libmachine: (newest-cni-929075)   </features>
	I1002 12:14:08.201774  389933 main.go:141] libmachine: (newest-cni-929075)   <cpu mode='host-passthrough'>
	I1002 12:14:08.201788  389933 main.go:141] libmachine: (newest-cni-929075)   
	I1002 12:14:08.201797  389933 main.go:141] libmachine: (newest-cni-929075)   </cpu>
	I1002 12:14:08.201811  389933 main.go:141] libmachine: (newest-cni-929075)   <os>
	I1002 12:14:08.201824  389933 main.go:141] libmachine: (newest-cni-929075)     <type>hvm</type>
	I1002 12:14:08.201839  389933 main.go:141] libmachine: (newest-cni-929075)     <boot dev='cdrom'/>
	I1002 12:14:08.201852  389933 main.go:141] libmachine: (newest-cni-929075)     <boot dev='hd'/>
	I1002 12:14:08.201866  389933 main.go:141] libmachine: (newest-cni-929075)     <bootmenu enable='no'/>
	I1002 12:14:08.201879  389933 main.go:141] libmachine: (newest-cni-929075)   </os>
	I1002 12:14:08.201893  389933 main.go:141] libmachine: (newest-cni-929075)   <devices>
	I1002 12:14:08.201907  389933 main.go:141] libmachine: (newest-cni-929075)     <disk type='file' device='cdrom'>
	I1002 12:14:08.201927  389933 main.go:141] libmachine: (newest-cni-929075)       <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/boot2docker.iso'/>
	I1002 12:14:08.201941  389933 main.go:141] libmachine: (newest-cni-929075)       <target dev='hdc' bus='scsi'/>
	I1002 12:14:08.201955  389933 main.go:141] libmachine: (newest-cni-929075)       <readonly/>
	I1002 12:14:08.201968  389933 main.go:141] libmachine: (newest-cni-929075)     </disk>
	I1002 12:14:08.201984  389933 main.go:141] libmachine: (newest-cni-929075)     <disk type='file' device='disk'>
	I1002 12:14:08.202000  389933 main.go:141] libmachine: (newest-cni-929075)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 12:14:08.202020  389933 main.go:141] libmachine: (newest-cni-929075)       <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/newest-cni-929075.rawdisk'/>
	I1002 12:14:08.202032  389933 main.go:141] libmachine: (newest-cni-929075)       <target dev='hda' bus='virtio'/>
	I1002 12:14:08.202042  389933 main.go:141] libmachine: (newest-cni-929075)     </disk>
	I1002 12:14:08.202053  389933 main.go:141] libmachine: (newest-cni-929075)     <interface type='network'>
	I1002 12:14:08.202070  389933 main.go:141] libmachine: (newest-cni-929075)       <source network='mk-newest-cni-929075'/>
	I1002 12:14:08.202085  389933 main.go:141] libmachine: (newest-cni-929075)       <model type='virtio'/>
	I1002 12:14:08.202099  389933 main.go:141] libmachine: (newest-cni-929075)     </interface>
	I1002 12:14:08.202113  389933 main.go:141] libmachine: (newest-cni-929075)     <interface type='network'>
	I1002 12:14:08.202128  389933 main.go:141] libmachine: (newest-cni-929075)       <source network='default'/>
	I1002 12:14:08.202142  389933 main.go:141] libmachine: (newest-cni-929075)       <model type='virtio'/>
	I1002 12:14:08.202156  389933 main.go:141] libmachine: (newest-cni-929075)     </interface>
	I1002 12:14:08.202168  389933 main.go:141] libmachine: (newest-cni-929075)     <serial type='pty'>
	I1002 12:14:08.202183  389933 main.go:141] libmachine: (newest-cni-929075)       <target port='0'/>
	I1002 12:14:08.202195  389933 main.go:141] libmachine: (newest-cni-929075)     </serial>
	I1002 12:14:08.202210  389933 main.go:141] libmachine: (newest-cni-929075)     <console type='pty'>
	I1002 12:14:08.202224  389933 main.go:141] libmachine: (newest-cni-929075)       <target type='serial' port='0'/>
	I1002 12:14:08.202238  389933 main.go:141] libmachine: (newest-cni-929075)     </console>
	I1002 12:14:08.202250  389933 main.go:141] libmachine: (newest-cni-929075)     <rng model='virtio'>
	I1002 12:14:08.202263  389933 main.go:141] libmachine: (newest-cni-929075)       <backend model='random'>/dev/random</backend>
	I1002 12:14:08.202276  389933 main.go:141] libmachine: (newest-cni-929075)     </rng>
	I1002 12:14:08.202289  389933 main.go:141] libmachine: (newest-cni-929075)     
	I1002 12:14:08.202301  389933 main.go:141] libmachine: (newest-cni-929075)     
	I1002 12:14:08.202316  389933 main.go:141] libmachine: (newest-cni-929075)   </devices>
	I1002 12:14:08.202328  389933 main.go:141] libmachine: (newest-cni-929075) </domain>
	I1002 12:14:08.202343  389933 main.go:141] libmachine: (newest-cni-929075) 
	I1002 12:14:08.210926  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2a:50:5e in network default
	I1002 12:14:08.211545  389933 main.go:141] libmachine: (newest-cni-929075) Ensuring networks are active...
	I1002 12:14:08.211575  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:08.212174  389933 main.go:141] libmachine: (newest-cni-929075) Ensuring network default is active
	I1002 12:14:08.212446  389933 main.go:141] libmachine: (newest-cni-929075) Ensuring network mk-newest-cni-929075 is active
	I1002 12:14:08.212926  389933 main.go:141] libmachine: (newest-cni-929075) Getting domain xml...
	I1002 12:14:08.213605  389933 main.go:141] libmachine: (newest-cni-929075) Creating domain...
	I1002 12:14:09.504333  389933 main.go:141] libmachine: (newest-cni-929075) Waiting to get IP...
	I1002 12:14:09.505102  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:09.505550  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:09.505641  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:09.505564  389957 retry.go:31] will retry after 308.145581ms: waiting for machine to come up
	I1002 12:14:09.815075  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:09.815632  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:09.815663  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:09.815595  389957 retry.go:31] will retry after 328.787137ms: waiting for machine to come up
	I1002 12:14:10.145981  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:10.146494  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:10.146528  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:10.146413  389957 retry.go:31] will retry after 362.041752ms: waiting for machine to come up
	I1002 12:14:10.509644  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:10.510094  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:10.510129  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:10.510038  389957 retry.go:31] will retry after 514.710376ms: waiting for machine to come up
	I1002 12:14:11.026961  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:11.027450  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:11.027491  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:11.027390  389957 retry.go:31] will retry after 545.789907ms: waiting for machine to come up
	I1002 12:14:11.575193  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:11.575631  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:11.575657  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:11.575578  389957 retry.go:31] will retry after 644.459981ms: waiting for machine to come up
	I1002 12:14:12.221616  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:12.222127  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:12.222154  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:12.222069  389957 retry.go:31] will retry after 1.074468524s: waiting for machine to come up
	I1002 12:14:13.297669  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:13.298220  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:13.298252  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:13.298169  389957 retry.go:31] will retry after 1.126830159s: waiting for machine to come up
	I1002 12:14:14.427021  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:14.427503  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:14.427540  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:14.427439  389957 retry.go:31] will retry after 1.637152644s: waiting for machine to come up
	I1002 12:14:16.067245  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:16.067676  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:16.067712  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:16.067646  389957 retry.go:31] will retry after 1.618895619s: waiting for machine to come up
	I1002 12:14:17.688337  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:17.688833  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:17.688869  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:17.688772  389957 retry.go:31] will retry after 2.311429982s: waiting for machine to come up
	I1002 12:14:20.002096  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:20.002771  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:20.002805  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:20.002730  389957 retry.go:31] will retry after 3.242475322s: waiting for machine to come up
	I1002 12:14:23.246400  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:23.246839  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:23.246868  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:23.246792  389957 retry.go:31] will retry after 4.373869377s: waiting for machine to come up
	I1002 12:14:27.622371  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:27.622985  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:27.623042  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:27.622943  389957 retry.go:31] will retry after 4.726197421s: waiting for machine to come up
	I1002 12:14:32.351292  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.351734  389933 main.go:141] libmachine: (newest-cni-929075) Found IP for machine: 192.168.83.146
	I1002 12:14:32.351770  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has current primary IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.351781  389933 main.go:141] libmachine: (newest-cni-929075) Reserving static IP address...
	I1002 12:14:32.352148  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find host DHCP lease matching {name: "newest-cni-929075", mac: "52:54:00:2d:e3:39", ip: "192.168.83.146"} in network mk-newest-cni-929075
	I1002 12:14:32.429329  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Getting to WaitForSSH function...
	I1002 12:14:32.429357  389933 main.go:141] libmachine: (newest-cni-929075) Reserved static IP address: 192.168.83.146
	I1002 12:14:32.429373  389933 main.go:141] libmachine: (newest-cni-929075) Waiting for SSH to be available...
	I1002 12:14:32.432356  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.432861  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:32.432901  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.433057  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Using SSH client type: external
	I1002 12:14:32.433089  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/id_rsa (-rw-------)
	I1002 12:14:32.433150  389933 main.go:141] libmachine: (newest-cni-929075) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 12:14:32.433179  389933 main.go:141] libmachine: (newest-cni-929075) DBG | About to run SSH command:
	I1002 12:14:32.433206  389933 main.go:141] libmachine: (newest-cni-929075) DBG | exit 0
	I1002 12:14:32.526186  389933 main.go:141] libmachine: (newest-cni-929075) DBG | SSH cmd err, output: <nil>: 
	I1002 12:14:32.526421  389933 main.go:141] libmachine: (newest-cni-929075) KVM machine creation complete!
	I1002 12:14:32.526819  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetConfigRaw
	I1002 12:14:32.527357  389933 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:14:32.527585  389933 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:14:32.527746  389933 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 12:14:32.527764  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetState
	I1002 12:14:32.529097  389933 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 12:14:32.529118  389933 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 12:14:32.529128  389933 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 12:14:32.529138  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:32.531641  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.531950  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:32.531984  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.532118  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:32.532305  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:32.532498  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:32.532666  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:32.532831  389933 main.go:141] libmachine: Using SSH client type: native
	I1002 12:14:32.533180  389933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.146 22 <nil> <nil>}
	I1002 12:14:32.533200  389933 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 12:14:32.653780  389933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 12:14:32.653816  389933 main.go:141] libmachine: Detecting the provisioner...
	I1002 12:14:32.653829  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:32.656654  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.657053  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:32.657086  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.657253  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:32.657465  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:32.657656  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:32.657866  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:32.658065  389933 main.go:141] libmachine: Using SSH client type: native
	I1002 12:14:32.658507  389933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.146 22 <nil> <nil>}
	I1002 12:14:32.658530  389933 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 12:14:32.783252  389933 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1002 12:14:32.783327  389933 main.go:141] libmachine: found compatible host: buildroot
	I1002 12:14:32.783336  389933 main.go:141] libmachine: Provisioning with buildroot...
	I1002 12:14:32.783350  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetMachineName
	I1002 12:14:32.783628  389933 buildroot.go:166] provisioning hostname "newest-cni-929075"
	I1002 12:14:32.783657  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetMachineName
	I1002 12:14:32.783858  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:32.787335  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.787756  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:32.787789  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.788024  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:32.788236  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:32.788420  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:32.788576  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:32.788769  389933 main.go:141] libmachine: Using SSH client type: native
	I1002 12:14:32.789109  389933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.146 22 <nil> <nil>}
	I1002 12:14:32.789125  389933 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-929075 && echo "newest-cni-929075" | sudo tee /etc/hostname
	I1002 12:14:32.924671  389933 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-929075
	
	I1002 12:14:32.924708  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:32.927594  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.927932  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:32.927968  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.928216  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:32.928437  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:32.928651  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:32.928807  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:32.929000  389933 main.go:141] libmachine: Using SSH client type: native
	I1002 12:14:32.929377  389933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.146 22 <nil> <nil>}
	I1002 12:14:32.929398  389933 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-929075' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-929075/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-929075' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 12:14:33.062885  389933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 12:14:33.062921  389933 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 12:14:33.062952  389933 buildroot.go:174] setting up certificates
	I1002 12:14:33.062973  389933 provision.go:83] configureAuth start
	I1002 12:14:33.062994  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetMachineName
	I1002 12:14:33.063331  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetIP
	I1002 12:14:33.066125  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.066591  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.066634  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.066742  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:33.069452  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.069839  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.069873  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.069929  389933 provision.go:138] copyHostCerts
	I1002 12:14:33.070005  389933 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 12:14:33.070020  389933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 12:14:33.070084  389933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 12:14:33.070206  389933 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 12:14:33.070219  389933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 12:14:33.070262  389933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 12:14:33.070350  389933 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 12:14:33.070380  389933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 12:14:33.070417  389933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 12:14:33.070485  389933 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.newest-cni-929075 san=[192.168.83.146 192.168.83.146 localhost 127.0.0.1 minikube newest-cni-929075]
	I1002 12:14:33.193013  389933 provision.go:172] copyRemoteCerts
	I1002 12:14:33.193090  389933 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 12:14:33.193128  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:33.195744  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.196137  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.196180  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.196373  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:33.196580  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:33.196862  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:33.197069  389933 sshutil.go:53] new ssh client: &{IP:192.168.83.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/id_rsa Username:docker}
	I1002 12:14:33.288384  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 12:14:33.312055  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1002 12:14:33.335846  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 12:14:33.359249  389933 provision.go:86] duration metric: configureAuth took 296.256123ms
	I1002 12:14:33.359279  389933 buildroot.go:189] setting minikube options for container-runtime
	I1002 12:14:33.359516  389933 config.go:182] Loaded profile config "newest-cni-929075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:14:33.359611  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:33.362182  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.362575  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.362615  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.362794  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:33.363031  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:33.363246  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:33.363412  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:33.363619  389933 main.go:141] libmachine: Using SSH client type: native
	I1002 12:14:33.363925  389933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.146 22 <nil> <nil>}
	I1002 12:14:33.363947  389933 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 12:14:33.684236  389933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 12:14:33.684276  389933 main.go:141] libmachine: Checking connection to Docker...
	I1002 12:14:33.684290  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetURL
	I1002 12:14:33.685730  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Using libvirt version 6000000
	I1002 12:14:33.688673  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.689094  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.689141  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.689309  389933 main.go:141] libmachine: Docker is up and running!
	I1002 12:14:33.689326  389933 main.go:141] libmachine: Reticulating splines...
	I1002 12:14:33.689332  389933 client.go:171] LocalClient.Create took 26.154339132s
	I1002 12:14:33.689369  389933 start.go:167] duration metric: libmachine.API.Create for "newest-cni-929075" took 26.154421654s
	I1002 12:14:33.689383  389933 start.go:300] post-start starting for "newest-cni-929075" (driver="kvm2")
	I1002 12:14:33.689398  389933 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 12:14:33.689422  389933 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:14:33.689747  389933 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 12:14:33.689782  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:33.692902  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.692957  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.692985  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.693039  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:33.693255  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:33.693432  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:33.693605  389933 sshutil.go:53] new ssh client: &{IP:192.168.83.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/id_rsa Username:docker}
	I1002 12:14:33.788382  389933 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 12:14:33.792840  389933 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 12:14:33.792865  389933 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 12:14:33.792925  389933 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 12:14:33.792991  389933 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 12:14:33.793070  389933 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 12:14:33.801831  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 12:14:33.825672  389933 start.go:303] post-start completed in 136.274816ms
	I1002 12:14:33.825726  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetConfigRaw
	I1002 12:14:33.826385  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetIP
	I1002 12:14:33.829277  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.829664  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.829698  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.829952  389933 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/config.json ...
	I1002 12:14:33.830118  389933 start.go:128] duration metric: createHost completed in 26.313486601s
	I1002 12:14:33.830148  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:33.832409  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.832775  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.832813  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.832945  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:33.833148  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:33.833329  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:33.833497  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:33.833690  389933 main.go:141] libmachine: Using SSH client type: native
	I1002 12:14:33.834043  389933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.146 22 <nil> <nil>}
	I1002 12:14:33.834056  389933 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 12:14:33.959219  389933 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696248873.941376365
	
	I1002 12:14:33.959248  389933 fix.go:206] guest clock: 1696248873.941376365
	I1002 12:14:33.959258  389933 fix.go:219] Guest: 2023-10-02 12:14:33.941376365 +0000 UTC Remote: 2023-10-02 12:14:33.830134 +0000 UTC m=+26.424868911 (delta=111.242365ms)
	I1002 12:14:33.959285  389933 fix.go:190] guest clock delta is within tolerance: 111.242365ms
	I1002 12:14:33.959291  389933 start.go:83] releasing machines lock for "newest-cni-929075", held for 26.442744791s
	I1002 12:14:33.959322  389933 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:14:33.959646  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetIP
	I1002 12:14:33.962608  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.963052  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.963082  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.963218  389933 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:14:33.963739  389933 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:14:33.963958  389933 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:14:33.964060  389933 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 12:14:33.964107  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:33.964205  389933 ssh_runner.go:195] Run: cat /version.json
	I1002 12:14:33.964225  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:33.967076  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.967196  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.967440  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.967470  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.967551  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.967576  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.967599  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:33.967792  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:33.967816  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:33.967937  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:33.968018  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:33.968116  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:33.968165  389933 sshutil.go:53] new ssh client: &{IP:192.168.83.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/id_rsa Username:docker}
	I1002 12:14:33.968241  389933 sshutil.go:53] new ssh client: &{IP:192.168.83.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/id_rsa Username:docker}
	I1002 12:14:34.077174  389933 ssh_runner.go:195] Run: systemctl --version
	I1002 12:14:34.084042  389933 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 12:14:34.243499  389933 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 12:14:34.251193  389933 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 12:14:34.251267  389933 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:14:34.266986  389933 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 12:14:34.267015  389933 start.go:469] detecting cgroup driver to use...
	I1002 12:14:34.267074  389933 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 12:14:34.283318  389933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 12:14:34.297301  389933 docker.go:197] disabling cri-docker service (if available) ...
	I1002 12:14:34.297394  389933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 12:14:34.311917  389933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 12:14:34.325966  389933 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 12:14:34.433208  389933 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 12:14:34.565807  389933 docker.go:213] disabling docker service ...
	I1002 12:14:34.565880  389933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 12:14:34.580322  389933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 12:14:34.592071  389933 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 12:14:34.711661  389933 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 12:14:34.845885  389933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 12:14:34.861163  389933 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 12:14:34.881392  389933 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 12:14:34.881463  389933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:14:34.893436  389933 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 12:14:34.893510  389933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:14:34.906253  389933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:14:34.916847  389933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
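The four `sed` commands above rewrite `/etc/crio/crio.conf.d/02-crio.conf` in place: pin the pause image, switch the cgroup manager, and replace the conmon cgroup. A dry run of the same edits against a scratch copy (the file contents here are a minimal stand-in, not the real 02-crio.conf):

```shell
# Apply the cri-o config edits from the log to a scratch file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.6"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"                          # drop old value
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"   # re-add after cgroup_manager

cat "$conf"
```

Note the delete-then-append dance for `conmon_cgroup`: removing the line first keeps the edit idempotent, so rerunning the sequence never duplicates the key.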
	I1002 12:14:34.928534  389933 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 12:14:34.940522  389933 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 12:14:34.950147  389933 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 12:14:34.950220  389933 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 12:14:34.964486  389933 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
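The netfilter sequence above shows the fallback: `sysctl net.bridge.bridge-nf-call-iptables` fails with status 255 because `br_netfilter` is not loaded, so minikube loads the module and then enables IP forwarding. A read-only sketch of that decision (the echo messages are illustrative; the paths are the real kernel interfaces):

```shell
# If the bridge netfilter sysctl is absent, the br_netfilter module
# still needs loading; this only reports which branch would be taken.
sysctl_path=/proc/sys/net/bridge/bridge-nf-call-iptables

if [ -e "$sysctl_path" ]; then
  action="already available"
else
  action="would run: sudo modprobe br_netfilter"
fi
echo "bridge-nf-call-iptables: $action"
```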
	I1002 12:14:34.974969  389933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 12:14:35.100343  389933 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 12:14:35.279939  389933 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 12:14:35.280025  389933 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 12:14:35.285402  389933 start.go:537] Will wait 60s for crictl version
	I1002 12:14:35.285462  389933 ssh_runner.go:195] Run: which crictl
	I1002 12:14:35.289356  389933 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 12:14:35.331066  389933 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 12:14:35.331176  389933 ssh_runner.go:195] Run: crio --version
	I1002 12:14:35.381989  389933 ssh_runner.go:195] Run: crio --version
	I1002 12:14:35.431200  389933 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 12:14:35.432478  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetIP
	I1002 12:14:35.435272  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:35.435647  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:35.435681  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:35.435851  389933 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1002 12:14:35.439940  389933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 12:14:35.452090  389933 localpath.go:92] copying /home/jenkins/minikube-integration/17340-332611/.minikube/client.crt -> /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/client.crt
	I1002 12:14:35.452242  389933 localpath.go:117] copying /home/jenkins/minikube-integration/17340-332611/.minikube/client.key -> /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/client.key
	I1002 12:14:35.454108  389933 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1002 12:14:35.455582  389933 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 12:14:35.455639  389933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 12:14:35.490623  389933 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 12:14:35.490699  389933 ssh_runner.go:195] Run: which lz4
	I1002 12:14:35.495248  389933 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 12:14:35.499492  389933 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 12:14:35.499526  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1002 12:14:37.296484  389933 crio.go:444] Took 1.801260 seconds to copy over tarball
	I1002 12:14:37.296579  389933 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 12:14:40.254011  389933 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.957400802s)
	I1002 12:14:40.254044  389933 crio.go:451] Took 2.957527 seconds to extract the tarball
	I1002 12:14:40.254055  389933 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 12:14:40.300106  389933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 12:14:40.366315  389933 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 12:14:40.366342  389933 cache_images.go:84] Images are preloaded, skipping loading
	I1002 12:14:40.366432  389933 ssh_runner.go:195] Run: crio config
	I1002 12:14:40.436116  389933 cni.go:84] Creating CNI manager for ""
	I1002 12:14:40.436145  389933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 12:14:40.436170  389933 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1002 12:14:40.436203  389933 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.83.146 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-929075 NodeName:newest-cni-929075 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 12:14:40.436429  389933 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-929075"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 12:14:40.436566  389933 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-929075 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:newest-cni-929075 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 12:14:40.436641  389933 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 12:14:40.447026  389933 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 12:14:40.447110  389933 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 12:14:40.456913  389933 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I1002 12:14:40.474810  389933 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 12:14:40.492305  389933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1002 12:14:40.509976  389933 ssh_runner.go:195] Run: grep 192.168.83.146	control-plane.minikube.internal$ /etc/hosts
	I1002 12:14:40.514103  389933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
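The `/etc/hosts` update above uses a grep-then-append pattern: strip any existing line for the hostname, re-emit the file with the current entry, and copy it back. The same pattern, run against a scratch file so it works unprivileged (the seed contents are illustrative):

```shell
# Idempotent hosts-entry update: however many times this runs,
# the file ends up with exactly one line for the hostname.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.83.1\thost.minikube.internal\n' > "$hosts"

entry_ip=192.168.83.146
entry_host=control-plane.minikube.internal

{ grep -v "${entry_host}\$" "$hosts"; printf '%s\t%s\n' "$entry_ip" "$entry_host"; } > "$hosts.new"
mv "$hosts.new" "$hosts"

grep -c "$entry_host" "$hosts"   # prints 1
```

Because the old entry is filtered out before the new one is appended, rerunning the block (e.g. after an IP change on restart) replaces the line rather than accumulating duplicates.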
	I1002 12:14:40.528578  389933 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075 for IP: 192.168.83.146
	I1002 12:14:40.528617  389933 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:14:40.528819  389933 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 12:14:40.528888  389933 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 12:14:40.528990  389933 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/client.key
	I1002 12:14:40.529019  389933 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.key.aeaa3825
	I1002 12:14:40.529036  389933 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.crt.aeaa3825 with IP's: [192.168.83.146 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 12:14:40.747346  389933 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.crt.aeaa3825 ...
	I1002 12:14:40.747381  389933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.crt.aeaa3825: {Name:mkbcbdde62ae8d8d5a9965d1ae02a1ea9e3c5119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:14:40.747594  389933 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.key.aeaa3825 ...
	I1002 12:14:40.747618  389933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.key.aeaa3825: {Name:mk8fb7f1ceba877582e11948a715ef084120a6ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:14:40.747726  389933 certs.go:337] copying /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.crt.aeaa3825 -> /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.crt
	I1002 12:14:40.747791  389933 certs.go:341] copying /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.key.aeaa3825 -> /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.key
	I1002 12:14:40.747842  389933 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/proxy-client.key
	I1002 12:14:40.747860  389933 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/proxy-client.crt with IP's: []
	I1002 12:14:40.828193  389933 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/proxy-client.crt ...
	I1002 12:14:40.828225  389933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/proxy-client.crt: {Name:mk4e165e331add53ef25eb446ee6b7812b9e34fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:14:40.828412  389933 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/proxy-client.key ...
	I1002 12:14:40.828428  389933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/proxy-client.key: {Name:mk48dc53f3385190d1953597ac7942ece1f65bb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:14:40.828679  389933 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 12:14:40.828727  389933 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 12:14:40.828747  389933 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 12:14:40.828778  389933 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 12:14:40.828811  389933 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 12:14:40.828848  389933 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 12:14:40.828908  389933 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 12:14:40.829567  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 12:14:40.856999  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 12:14:40.883209  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 12:14:40.911694  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 12:14:40.937500  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 12:14:40.962681  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 12:14:40.987988  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 12:14:41.011847  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 12:14:41.036472  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 12:14:41.061487  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 12:14:41.086651  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 12:14:41.112978  389933 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 12:14:41.129585  389933 ssh_runner.go:195] Run: openssl version
	I1002 12:14:41.135636  389933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 12:14:41.145998  389933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 12:14:41.150869  389933 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 12:14:41.150931  389933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 12:14:41.157093  389933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 12:14:41.168134  389933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 12:14:41.178716  389933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:14:41.183804  389933 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:14:41.183869  389933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:14:41.190158  389933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 12:14:41.201213  389933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 12:14:41.212400  389933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 12:14:41.218016  389933 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 12:14:41.218087  389933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 12:14:41.225245  389933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 12:14:41.236391  389933 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 12:14:41.240827  389933 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 12:14:41.240877  389933 kubeadm.go:404] StartCluster: {Name:newest-cni-929075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.2 ClusterName:newest-cni-929075 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.146 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/mini
kube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:14:41.240958  389933 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 12:14:41.241007  389933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 12:14:41.287386  389933 cri.go:89] found id: ""
	I1002 12:14:41.287479  389933 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 12:14:41.297226  389933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 12:14:41.306856  389933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 12:14:41.316275  389933 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 12:14:41.316336  389933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 12:14:41.442096  389933 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 12:14:41.442378  389933 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 12:14:41.725465  389933 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 12:14:41.725598  389933 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 12:14:41.725775  389933 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 12:14:41.979444  389933 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 12:14:42.075418  389933 out.go:204]   - Generating certificates and keys ...
	I1002 12:14:42.075642  389933 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 12:14:42.075735  389933 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 12:14:42.103391  389933 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 12:14:42.297354  389933 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1002 12:14:42.392550  389933 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1002 12:14:42.708570  389933 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1002 12:14:42.934754  389933 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1002 12:14:42.934944  389933 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-929075] and IPs [192.168.83.146 127.0.0.1 ::1]
	I1002 12:14:42.982696  389933 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1002 12:14:42.982921  389933 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-929075] and IPs [192.168.83.146 127.0.0.1 ::1]
	I1002 12:14:43.034824  389933 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 12:14:43.311737  389933 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 12:14:43.499321  389933 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1002 12:14:43.499727  389933 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 12:14:43.647999  389933 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 12:14:43.894955  389933 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 12:14:44.088222  389933 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 12:14:44.221464  389933 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 12:14:44.222093  389933 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 12:14:44.225803  389933 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 12:14:44.227331  389933 out.go:204]   - Booting up control plane ...
	I1002 12:14:44.227457  389933 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 12:14:44.229445  389933 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 12:14:44.230479  389933 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 12:14:44.247082  389933 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 12:14:44.247197  389933 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 12:14:44.247249  389933 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 12:14:44.391354  389933 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 12:14:52.394764  389933 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004797 seconds
	I1002 12:14:52.394944  389933 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 12:14:52.410894  389933 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 12:14:52.949155  389933 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 12:14:52.949438  389933 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-929075 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 12:14:53.469419  389933 kubeadm.go:322] [bootstrap-token] Using token: g1bnlt.lj0v6dxxo5dyh1y9
	I1002 12:14:53.470959  389933 out.go:204]   - Configuring RBAC rules ...
	I1002 12:14:53.471122  389933 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 12:14:53.482868  389933 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 12:14:53.497577  389933 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 12:14:53.504229  389933 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 12:14:53.510992  389933 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 12:14:53.515877  389933 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 12:14:53.534419  389933 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 12:14:53.814973  389933 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 12:14:53.889297  389933 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 12:14:53.890289  389933 kubeadm.go:322] 
	I1002 12:14:53.890399  389933 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 12:14:53.890412  389933 kubeadm.go:322] 
	I1002 12:14:53.890503  389933 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 12:14:53.890523  389933 kubeadm.go:322] 
	I1002 12:14:53.890556  389933 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 12:14:53.890628  389933 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 12:14:53.890700  389933 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 12:14:53.890712  389933 kubeadm.go:322] 
	I1002 12:14:53.890796  389933 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 12:14:53.890830  389933 kubeadm.go:322] 
	I1002 12:14:53.890905  389933 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 12:14:53.890919  389933 kubeadm.go:322] 
	I1002 12:14:53.891003  389933 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 12:14:53.891111  389933 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 12:14:53.891209  389933 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 12:14:53.891218  389933 kubeadm.go:322] 
	I1002 12:14:53.891331  389933 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 12:14:53.891410  389933 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 12:14:53.891417  389933 kubeadm.go:322] 
	I1002 12:14:53.891480  389933 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token g1bnlt.lj0v6dxxo5dyh1y9 \
	I1002 12:14:53.891633  389933 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 12:14:53.891664  389933 kubeadm.go:322] 	--control-plane 
	I1002 12:14:53.891670  389933 kubeadm.go:322] 
	I1002 12:14:53.891799  389933 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 12:14:53.891816  389933 kubeadm.go:322] 
	I1002 12:14:53.891922  389933 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g1bnlt.lj0v6dxxo5dyh1y9 \
	I1002 12:14:53.892087  389933 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 12:14:53.892629  389933 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 12:14:53.892654  389933 cni.go:84] Creating CNI manager for ""
	I1002 12:14:53.892661  389933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 12:14:53.894678  389933 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 12:14:53.896323  389933 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 12:14:53.933026  389933 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 12:14:53.968693  389933 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 12:14:53.968838  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:14:53.968875  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=newest-cni-929075 minikube.k8s.io/updated_at=2023_10_02T12_14_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:14:54.038729  389933 ops.go:34] apiserver oom_adj: -16
	I1002 12:14:54.317442  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:14:54.430731  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:14:55.027896  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:14:55.528373  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:14:56.027818  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:14:56.527772  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:14:57.028408  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:14:57.528177  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:14:58.027752  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:14:58.528636  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:14:59.028500  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:14:59.528452  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:15:00.028712  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:15:00.527771  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:15:01.028326  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:15:01.527856  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:15:02.027970  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:15:02.528307  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:15:03.027768  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:15:03.528684  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:15:04.027928  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:15:04.528365  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:15:05.027892  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:15:05.528605  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:15:06.028369  389933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:15:06.161272  389933 kubeadm.go:1081] duration metric: took 12.192506904s to wait for elevateKubeSystemPrivileges.
	I1002 12:15:06.161317  389933 kubeadm.go:406] StartCluster complete in 24.920443787s
	I1002 12:15:06.161345  389933 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:15:06.161445  389933 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 12:15:06.163662  389933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:15:06.164045  389933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 12:15:06.164071  389933 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 12:15:06.164163  389933 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-929075"
	I1002 12:15:06.164181  389933 addons.go:69] Setting default-storageclass=true in profile "newest-cni-929075"
	I1002 12:15:06.164190  389933 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-929075"
	I1002 12:15:06.164257  389933 config.go:182] Loaded profile config "newest-cni-929075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:15:06.164256  389933 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-929075"
	I1002 12:15:06.164276  389933 host.go:66] Checking if "newest-cni-929075" exists ...
	I1002 12:15:06.164752  389933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:15:06.164788  389933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:15:06.164791  389933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:15:06.164815  389933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:15:06.181275  389933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37651
	I1002 12:15:06.181486  389933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42553
	I1002 12:15:06.181743  389933 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:15:06.181847  389933 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:15:06.182261  389933 main.go:141] libmachine: Using API Version  1
	I1002 12:15:06.182292  389933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:15:06.182449  389933 main.go:141] libmachine: Using API Version  1
	I1002 12:15:06.182482  389933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:15:06.182689  389933 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:15:06.182794  389933 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:15:06.182976  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetState
	I1002 12:15:06.183598  389933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:15:06.183636  389933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:15:06.189313  389933 addons.go:231] Setting addon default-storageclass=true in "newest-cni-929075"
	I1002 12:15:06.189363  389933 host.go:66] Checking if "newest-cni-929075" exists ...
	I1002 12:15:06.189925  389933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:15:06.189992  389933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:15:06.200100  389933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I1002 12:15:06.200622  389933 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:15:06.201206  389933 main.go:141] libmachine: Using API Version  1
	I1002 12:15:06.201235  389933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:15:06.201674  389933 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:15:06.201876  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetState
	I1002 12:15:06.204315  389933 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-929075" context rescaled to 1 replicas
	I1002 12:15:06.204335  389933 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:15:06.204355  389933 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.146 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 12:15:06.206334  389933 out.go:177] * Verifying Kubernetes components...
	I1002 12:15:06.207342  389933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43915
	I1002 12:15:06.208063  389933 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-02 11:54:11 UTC, ends at Mon 2023-10-02 12:15:07 UTC. --
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.400138210Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=58273410-d69e-4209-81a7-f29893576e4d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.400294883Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:05b91b88f25513c1be064704fc2960704b7aa627c2d664ee54b3d8417cc6667c,PodSandboxId:3bad86a275e9cd14afe9c6c4e389426e6d8e1e69557e615793a528a4e9782aef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247982512745551,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b21176-98f2-4fb6-98ea-1435def0edd9,},Annotations:map[string]string{io.kubernetes.container.hash: ff073123,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e6fa9cb90f98e3b1e49aab992bcd3c9b6b2fd3af9507ee27854642a2ded6b52,PodSandboxId:bb7a2ea6859d7f28f555b7ab7f9ff59da183e05912d99b254b10f92d553f85b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696247981927410404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qbmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a61868-45fc-40cd-8887-0609835639c1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f0caeef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6fc3c46243cd934fa6df76a09de19a56b92faba025437f84b6e7f76943c325,PodSandboxId:989fa6adc06d4b8a1c1ccd570a23022e215dd518a6e7dd680ec80afbe2d24237,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696247980189250265,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6g7f7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 37b0eff0-06cb-4b57-b679-970c738d0485,},Annotations:map[string]string{io.kubernetes.container.hash: 6b43ac23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef0f3434fa2a074d7bb8bb02382b0874dffc1b727ae6447a14450033f3d2c096,PodSandboxId:acd19a00c9de36be1d3e2cdbd5b9c5515d39171714985f05af656c203308c16c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696247958744683350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a1777afbb0c7e9fc3e84050349e0a2e8,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ded080ee3bc0b7d8cb6d9a097e51a2fd757f3febbacdf6caedfaf5def926899,PodSandboxId:35b65f1b622469307ded6fbfe569eee4e029aed5c8dfb2fadb4c2b231a1e934b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696247958477600985,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4f70d092091e94eb9a4455eabeed2b,},Annotations:
map[string]string{io.kubernetes.container.hash: 4eb25fd6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f90e4456cc7bb392b5155e5a0cc316a61313d95e28d42b006f9d702bcc2ab99,PodSandboxId:7727bf964c0c6c13bb7861839e221c1912d9e3265f0b7d21bff22b0a0ba64894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696247958527992057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205018018dd24b6c78ddf
e11802a8562,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c200729350d4f7fbece35ab983872da64699c57ae45d59a61d76470932d369,PodSandboxId:959099305ec1e5b280289c321bb98a4f8e8bdb120b411398d70e923716001feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696247958499957520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123e939d6b6526ca88a48a63ac0ec49
a,},Annotations:map[string]string{io.kubernetes.container.hash: d684db14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=58273410-d69e-4209-81a7-f29893576e4d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.424814241Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=090d4e40-fc3a-4fe9-bde7-374aaac1144f name=/runtime.v1.RuntimeService/Status
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.424943761Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=090d4e40-fc3a-4fe9-bde7-374aaac1144f name=/runtime.v1.RuntimeService/Status
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.451812012Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=cef7ec0a-7fda-4407-b73b-089952a4f136 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.451892163Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cef7ec0a-7fda-4407-b73b-089952a4f136 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.454149442Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1722ad54-d712-4da1-822a-07534df79ec6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.455153490Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248907455138515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1722ad54-d712-4da1-822a-07534df79ec6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.455823217Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=140b910f-3d16-4146-aa61-092f198bd3c0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.455899859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=140b910f-3d16-4146-aa61-092f198bd3c0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.456074141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:05b91b88f25513c1be064704fc2960704b7aa627c2d664ee54b3d8417cc6667c,PodSandboxId:3bad86a275e9cd14afe9c6c4e389426e6d8e1e69557e615793a528a4e9782aef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247982512745551,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b21176-98f2-4fb6-98ea-1435def0edd9,},Annotations:map[string]string{io.kubernetes.container.hash: ff073123,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e6fa9cb90f98e3b1e49aab992bcd3c9b6b2fd3af9507ee27854642a2ded6b52,PodSandboxId:bb7a2ea6859d7f28f555b7ab7f9ff59da183e05912d99b254b10f92d553f85b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696247981927410404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qbmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a61868-45fc-40cd-8887-0609835639c1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f0caeef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6fc3c46243cd934fa6df76a09de19a56b92faba025437f84b6e7f76943c325,PodSandboxId:989fa6adc06d4b8a1c1ccd570a23022e215dd518a6e7dd680ec80afbe2d24237,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696247980189250265,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6g7f7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 37b0eff0-06cb-4b57-b679-970c738d0485,},Annotations:map[string]string{io.kubernetes.container.hash: 6b43ac23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef0f3434fa2a074d7bb8bb02382b0874dffc1b727ae6447a14450033f3d2c096,PodSandboxId:acd19a00c9de36be1d3e2cdbd5b9c5515d39171714985f05af656c203308c16c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696247958744683350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a1777afbb0c7e9fc3e84050349e0a2e8,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ded080ee3bc0b7d8cb6d9a097e51a2fd757f3febbacdf6caedfaf5def926899,PodSandboxId:35b65f1b622469307ded6fbfe569eee4e029aed5c8dfb2fadb4c2b231a1e934b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696247958477600985,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4f70d092091e94eb9a4455eabeed2b,},Annotations:
map[string]string{io.kubernetes.container.hash: 4eb25fd6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f90e4456cc7bb392b5155e5a0cc316a61313d95e28d42b006f9d702bcc2ab99,PodSandboxId:7727bf964c0c6c13bb7861839e221c1912d9e3265f0b7d21bff22b0a0ba64894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696247958527992057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205018018dd24b6c78ddf
e11802a8562,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c200729350d4f7fbece35ab983872da64699c57ae45d59a61d76470932d369,PodSandboxId:959099305ec1e5b280289c321bb98a4f8e8bdb120b411398d70e923716001feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696247958499957520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123e939d6b6526ca88a48a63ac0ec49
a,},Annotations:map[string]string{io.kubernetes.container.hash: d684db14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=140b910f-3d16-4146-aa61-092f198bd3c0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.498564420Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5f470839-fbe6-43cc-bb18-6405efa3fc88 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.498675448Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5f470839-fbe6-43cc-bb18-6405efa3fc88 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.500091099Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8b36254c-7e57-4663-b338-f63d4191b142 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.500585192Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248907500571604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8b36254c-7e57-4663-b338-f63d4191b142 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.501133078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d0020dc3-ed97-4e2e-956d-697d13e871ab name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.501199123Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d0020dc3-ed97-4e2e-956d-697d13e871ab name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.501355229Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:05b91b88f25513c1be064704fc2960704b7aa627c2d664ee54b3d8417cc6667c,PodSandboxId:3bad86a275e9cd14afe9c6c4e389426e6d8e1e69557e615793a528a4e9782aef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247982512745551,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b21176-98f2-4fb6-98ea-1435def0edd9,},Annotations:map[string]string{io.kubernetes.container.hash: ff073123,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e6fa9cb90f98e3b1e49aab992bcd3c9b6b2fd3af9507ee27854642a2ded6b52,PodSandboxId:bb7a2ea6859d7f28f555b7ab7f9ff59da183e05912d99b254b10f92d553f85b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696247981927410404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qbmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a61868-45fc-40cd-8887-0609835639c1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f0caeef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6fc3c46243cd934fa6df76a09de19a56b92faba025437f84b6e7f76943c325,PodSandboxId:989fa6adc06d4b8a1c1ccd570a23022e215dd518a6e7dd680ec80afbe2d24237,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696247980189250265,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6g7f7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 37b0eff0-06cb-4b57-b679-970c738d0485,},Annotations:map[string]string{io.kubernetes.container.hash: 6b43ac23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef0f3434fa2a074d7bb8bb02382b0874dffc1b727ae6447a14450033f3d2c096,PodSandboxId:acd19a00c9de36be1d3e2cdbd5b9c5515d39171714985f05af656c203308c16c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696247958744683350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a1777afbb0c7e9fc3e84050349e0a2e8,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ded080ee3bc0b7d8cb6d9a097e51a2fd757f3febbacdf6caedfaf5def926899,PodSandboxId:35b65f1b622469307ded6fbfe569eee4e029aed5c8dfb2fadb4c2b231a1e934b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696247958477600985,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4f70d092091e94eb9a4455eabeed2b,},Annotations:
map[string]string{io.kubernetes.container.hash: 4eb25fd6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f90e4456cc7bb392b5155e5a0cc316a61313d95e28d42b006f9d702bcc2ab99,PodSandboxId:7727bf964c0c6c13bb7861839e221c1912d9e3265f0b7d21bff22b0a0ba64894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696247958527992057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205018018dd24b6c78ddf
e11802a8562,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c200729350d4f7fbece35ab983872da64699c57ae45d59a61d76470932d369,PodSandboxId:959099305ec1e5b280289c321bb98a4f8e8bdb120b411398d70e923716001feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696247958499957520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123e939d6b6526ca88a48a63ac0ec49
a,},Annotations:map[string]string{io.kubernetes.container.hash: d684db14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d0020dc3-ed97-4e2e-956d-697d13e871ab name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.545858944Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=df469b62-8703-4c1a-a007-26747cabdec0 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.545943729Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=df469b62-8703-4c1a-a007-26747cabdec0 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.547908392Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=89b7361f-9485-4675-874e-8997836dc6a3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.552250393Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248907551919009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=89b7361f-9485-4675-874e-8997836dc6a3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.553375260Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5122980f-3691-4746-9b9c-e8a7ef1df7fb name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.553607039Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5122980f-3691-4746-9b9c-e8a7ef1df7fb name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:15:07 embed-certs-487027 crio[717]: time="2023-10-02 12:15:07.554033523Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:05b91b88f25513c1be064704fc2960704b7aa627c2d664ee54b3d8417cc6667c,PodSandboxId:3bad86a275e9cd14afe9c6c4e389426e6d8e1e69557e615793a528a4e9782aef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247982512745551,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b21176-98f2-4fb6-98ea-1435def0edd9,},Annotations:map[string]string{io.kubernetes.container.hash: ff073123,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e6fa9cb90f98e3b1e49aab992bcd3c9b6b2fd3af9507ee27854642a2ded6b52,PodSandboxId:bb7a2ea6859d7f28f555b7ab7f9ff59da183e05912d99b254b10f92d553f85b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696247981927410404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qbmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54a61868-45fc-40cd-8887-0609835639c1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f0caeef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b6fc3c46243cd934fa6df76a09de19a56b92faba025437f84b6e7f76943c325,PodSandboxId:989fa6adc06d4b8a1c1ccd570a23022e215dd518a6e7dd680ec80afbe2d24237,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696247980189250265,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6g7f7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 37b0eff0-06cb-4b57-b679-970c738d0485,},Annotations:map[string]string{io.kubernetes.container.hash: 6b43ac23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef0f3434fa2a074d7bb8bb02382b0874dffc1b727ae6447a14450033f3d2c096,PodSandboxId:acd19a00c9de36be1d3e2cdbd5b9c5515d39171714985f05af656c203308c16c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696247958744683350,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a1777afbb0c7e9fc3e84050349e0a2e8,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ded080ee3bc0b7d8cb6d9a097e51a2fd757f3febbacdf6caedfaf5def926899,PodSandboxId:35b65f1b622469307ded6fbfe569eee4e029aed5c8dfb2fadb4c2b231a1e934b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696247958477600985,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4f70d092091e94eb9a4455eabeed2b,},Annotations:
map[string]string{io.kubernetes.container.hash: 4eb25fd6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f90e4456cc7bb392b5155e5a0cc316a61313d95e28d42b006f9d702bcc2ab99,PodSandboxId:7727bf964c0c6c13bb7861839e221c1912d9e3265f0b7d21bff22b0a0ba64894,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696247958527992057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205018018dd24b6c78ddfe11802a8562,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c200729350d4f7fbece35ab983872da64699c57ae45d59a61d76470932d369,PodSandboxId:959099305ec1e5b280289c321bb98a4f8e8bdb120b411398d70e923716001feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696247958499957520,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-487027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123e939d6b6526ca88a48a63ac0ec49a,},Annotations:map[string]string{io.kubernetes.container.hash: d684db14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5122980f-3691-4746-9b9c-e8a7ef1df7fb name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	05b91b88f2551       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   3bad86a275e9c       storage-provisioner
	9e6fa9cb90f98       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   bb7a2ea6859d7       coredns-5dd5756b68-qbmwd
	3b6fc3c46243c       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   15 minutes ago      Running             kube-proxy                0                   989fa6adc06d4       kube-proxy-6g7f7
	ef0f3434fa2a0       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   15 minutes ago      Running             kube-scheduler            2                   acd19a00c9de3       kube-scheduler-embed-certs-487027
	0f90e4456cc7b       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   15 minutes ago      Running             kube-controller-manager   2                   7727bf964c0c6       kube-controller-manager-embed-certs-487027
	07c200729350d       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   15 minutes ago      Running             kube-apiserver            2                   959099305ec1e       kube-apiserver-embed-certs-487027
	0ded080ee3bc0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   35b65f1b62246       etcd-embed-certs-487027
	
	* 
	* ==> coredns [9e6fa9cb90f98e3b1e49aab992bcd3c9b6b2fd3af9507ee27854642a2ded6b52] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49947 - 38866 "HINFO IN 5084756394073907370.16678952905601175. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.014470802s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-487027
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-487027
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=embed-certs-487027
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T11_59_26_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 11:59:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-487027
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 12:15:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 12:15:05 +0000   Mon, 02 Oct 2023 11:59:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 12:15:05 +0000   Mon, 02 Oct 2023 11:59:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 12:15:05 +0000   Mon, 02 Oct 2023 11:59:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 12:15:05 +0000   Mon, 02 Oct 2023 11:59:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.147
	  Hostname:    embed-certs-487027
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba4633ef464748b085c4a648df6d3a93
	  System UUID:                ba4633ef-4647-48b0-85c4-a648df6d3a93
	  Boot ID:                    b0f85ef0-dda5-4a13-9c3e-f60b885e2968
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-qbmwd                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-487027                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-487027             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-487027    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-6g7f7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-487027             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-hbb5d               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-487027 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-487027 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-487027 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-487027 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-487027 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-487027 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node embed-certs-487027 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m                kubelet          Node embed-certs-487027 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-487027 event: Registered Node embed-certs-487027 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct 2 11:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.077104] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.573132] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.434180] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.138191] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.387286] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.116105] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.101054] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.144939] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.111741] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.260744] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +18.033528] systemd-fstab-generator[916]: Ignoring "noauto" for root device
	[ +21.044818] kauditd_printk_skb: 29 callbacks suppressed
	[Oct 2 11:59] systemd-fstab-generator[3490]: Ignoring "noauto" for root device
	[  +9.807154] systemd-fstab-generator[3813]: Ignoring "noauto" for root device
	[ +14.491002] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [0ded080ee3bc0b7d8cb6d9a097e51a2fd757f3febbacdf6caedfaf5def926899] <==
	* {"level":"info","ts":"2023-10-02T11:59:20.671715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56ba37728bcb2347 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-02T11:59:20.671781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56ba37728bcb2347 received MsgPreVoteResp from 56ba37728bcb2347 at term 1"}
	{"level":"info","ts":"2023-10-02T11:59:20.67183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56ba37728bcb2347 became candidate at term 2"}
	{"level":"info","ts":"2023-10-02T11:59:20.671863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56ba37728bcb2347 received MsgVoteResp from 56ba37728bcb2347 at term 2"}
	{"level":"info","ts":"2023-10-02T11:59:20.671899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56ba37728bcb2347 became leader at term 2"}
	{"level":"info","ts":"2023-10-02T11:59:20.671929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 56ba37728bcb2347 elected leader 56ba37728bcb2347 at term 2"}
	{"level":"info","ts":"2023-10-02T11:59:20.676857Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"56ba37728bcb2347","local-member-attributes":"{Name:embed-certs-487027 ClientURLs:[https://192.168.72.147:2379]}","request-path":"/0/members/56ba37728bcb2347/attributes","cluster-id":"63cf2dc9c47dd9a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T11:59:20.678183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T11:59:20.679297Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.147:2379"}
	{"level":"info","ts":"2023-10-02T11:59:20.679376Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:59:20.679576Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T11:59:20.685395Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T11:59:20.687845Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"63cf2dc9c47dd9a","local-member-id":"56ba37728bcb2347","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:59:20.687976Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:59:20.688014Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T11:59:20.688227Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T11:59:20.688247Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-02T12:09:20.770219Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":720}
	{"level":"info","ts":"2023-10-02T12:09:20.775926Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":720,"took":"4.705851ms","hash":2470048670}
	{"level":"info","ts":"2023-10-02T12:09:20.776035Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2470048670,"revision":720,"compact-revision":-1}
	{"level":"info","ts":"2023-10-02T12:14:20.78156Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":963}
	{"level":"info","ts":"2023-10-02T12:14:20.784241Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":963,"took":"2.19254ms","hash":1968367954}
	{"level":"info","ts":"2023-10-02T12:14:20.784288Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1968367954,"revision":963,"compact-revision":720}
	{"level":"warn","ts":"2023-10-02T12:14:40.387777Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.874213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-02T12:14:40.388011Z","caller":"traceutil/trace.go:171","msg":"trace[975206203] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; response_count:0; response_revision:1222; }","duration":"130.265548ms","start":"2023-10-02T12:14:40.257705Z","end":"2023-10-02T12:14:40.387971Z","steps":["trace[975206203] 'count revisions from in-memory index tree'  (duration: 129.759663ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  12:15:07 up 21 min,  0 users,  load average: 0.39, 0.23, 0.20
	Linux embed-certs-487027 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [07c200729350d4f7fbece35ab983872da64699c57ae45d59a61d76470932d369] <==
	* E1002 12:10:23.850745       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:10:23.850782       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:11:22.687891       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1002 12:12:22.688305       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1002 12:12:23.850810       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:12:23.850994       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1002 12:12:23.851034       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 12:12:23.851130       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:12:23.851235       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:12:23.852932       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:13:22.687956       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1002 12:14:22.688004       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1002 12:14:22.852904       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:14:22.853095       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:14:22.853584       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1002 12:14:23.853578       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:14:23.853648       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1002 12:14:23.853657       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 12:14:23.853770       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:14:23.853903       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:14:23.855140       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [0f90e4456cc7bb392b5155e5a0cc316a61313d95e28d42b006f9d702bcc2ab99] <==
	* E1002 12:09:37.945186       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:09:38.442312       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:10:07.952050       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:10:08.453831       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:10:37.958621       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:10:38.463747       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 12:10:49.940357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="209.81µs"
	I1002 12:11:00.940353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="82.399µs"
	E1002 12:11:07.963946       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:11:08.473285       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:11:37.969624       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:11:38.482364       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:12:07.975554       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:12:08.491317       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:12:37.986915       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:12:38.500201       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:13:07.992990       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:13:08.509209       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:13:38.001193       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:13:38.518701       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:14:08.008779       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:14:08.530023       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:14:38.018570       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:14:38.540928       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:15:08.024531       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	
	* 
	* ==> kube-proxy [3b6fc3c46243cd934fa6df76a09de19a56b92faba025437f84b6e7f76943c325] <==
	* I1002 11:59:40.672519       1 server_others.go:69] "Using iptables proxy"
	I1002 11:59:40.735143       1 node.go:141] Successfully retrieved node IP: 192.168.72.147
	I1002 11:59:41.400126       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1002 11:59:41.400220       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 11:59:41.650617       1 server_others.go:152] "Using iptables Proxier"
	I1002 11:59:41.727575       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 11:59:41.837904       1 server.go:846] "Version info" version="v1.28.2"
	I1002 11:59:41.838608       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 11:59:41.839397       1 config.go:315] "Starting node config controller"
	I1002 11:59:41.839489       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 11:59:41.844430       1 config.go:188] "Starting service config controller"
	I1002 11:59:41.844692       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 11:59:41.844932       1 config.go:97] "Starting endpoint slice config controller"
	I1002 11:59:41.844960       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 11:59:41.940404       1 shared_informer.go:318] Caches are synced for node config
	I1002 11:59:41.946866       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 11:59:41.947369       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [ef0f3434fa2a074d7bb8bb02382b0874dffc1b727ae6447a14450033f3d2c096] <==
	* W1002 11:59:23.737260       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 11:59:23.737413       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 11:59:23.753052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1002 11:59:23.753218       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1002 11:59:23.780048       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1002 11:59:23.780137       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1002 11:59:23.809269       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 11:59:23.809364       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1002 11:59:23.864158       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 11:59:23.864274       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1002 11:59:23.894101       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 11:59:23.894167       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1002 11:59:23.904849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 11:59:23.904905       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1002 11:59:24.037416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 11:59:24.037644       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1002 11:59:24.061581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 11:59:24.061725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1002 11:59:24.120600       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 11:59:24.120717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1002 11:59:24.150159       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 11:59:24.150397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1002 11:59:24.231051       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 11:59:24.231286       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1002 11:59:25.862728       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 11:54:11 UTC, ends at Mon 2023-10-02 12:15:08 UTC. --
	Oct 02 12:12:27 embed-certs-487027 kubelet[3820]: E1002 12:12:27.023042    3820 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:12:27 embed-certs-487027 kubelet[3820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:12:27 embed-certs-487027 kubelet[3820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:12:27 embed-certs-487027 kubelet[3820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:12:33 embed-certs-487027 kubelet[3820]: E1002 12:12:33.923393    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:12:46 embed-certs-487027 kubelet[3820]: E1002 12:12:46.924268    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:13:01 embed-certs-487027 kubelet[3820]: E1002 12:13:01.922865    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:13:15 embed-certs-487027 kubelet[3820]: E1002 12:13:15.922763    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:13:27 embed-certs-487027 kubelet[3820]: E1002 12:13:27.024806    3820 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:13:27 embed-certs-487027 kubelet[3820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:13:27 embed-certs-487027 kubelet[3820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:13:27 embed-certs-487027 kubelet[3820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:13:30 embed-certs-487027 kubelet[3820]: E1002 12:13:30.924627    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:13:45 embed-certs-487027 kubelet[3820]: E1002 12:13:45.923732    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:13:59 embed-certs-487027 kubelet[3820]: E1002 12:13:59.923590    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:14:12 embed-certs-487027 kubelet[3820]: E1002 12:14:12.923644    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:14:23 embed-certs-487027 kubelet[3820]: E1002 12:14:23.922887    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:14:27 embed-certs-487027 kubelet[3820]: E1002 12:14:27.023114    3820 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:14:27 embed-certs-487027 kubelet[3820]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:14:27 embed-certs-487027 kubelet[3820]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:14:27 embed-certs-487027 kubelet[3820]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:14:27 embed-certs-487027 kubelet[3820]: E1002 12:14:27.116388    3820 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Oct 02 12:14:36 embed-certs-487027 kubelet[3820]: E1002 12:14:36.924890    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:14:49 embed-certs-487027 kubelet[3820]: E1002 12:14:49.922926    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	Oct 02 12:15:03 embed-certs-487027 kubelet[3820]: E1002 12:15:03.923104    3820 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hbb5d" podUID="2bf56144-ca7b-4688-883e-372101260b52"
	
	* 
	* ==> storage-provisioner [05b91b88f25513c1be064704fc2960704b7aa627c2d664ee54b3d8417cc6667c] <==
	* I1002 11:59:42.623292       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 11:59:42.648995       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 11:59:42.649117       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 11:59:42.665278       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 11:59:42.665534       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-487027_a2b9f4b7-959a-4c18-a755-59a062c0fc46!
	I1002 11:59:42.668384       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8d9281a0-87b8-4f66-90c1-c2b68898007d", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-487027_a2b9f4b7-959a-4c18-a755-59a062c0fc46 became leader
	I1002 11:59:42.767555       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-487027_a2b9f4b7-959a-4c18-a755-59a062c0fc46!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-487027 -n embed-certs-487027
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-487027 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-hbb5d
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-487027 describe pod metrics-server-57f55c9bc5-hbb5d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-487027 describe pod metrics-server-57f55c9bc5-hbb5d: exit status 1 (74.444754ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-hbb5d" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-487027 describe pod metrics-server-57f55c9bc5-hbb5d: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (381.68s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (308.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-304121 -n no-preload-304121
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-10-02 12:14:47.084734129 +0000 UTC m=+5937.656420185
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-304121 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-304121 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.388µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-304121 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-304121 -n no-preload-304121
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-304121 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-304121 logs -n 25: (1.428145049s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo find                             | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo crio                             | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-124285                                       | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-448198 | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | disable-driver-mounts-448198                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:47 UTC |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-304121             | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC | 02 Oct 23 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-749860        | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC | 02 Oct 23 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-487027            | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC | 02 Oct 23 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-487027                                  | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-777999  | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC | 02 Oct 23 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC |                     |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-304121                  | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 12:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-749860             | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-487027                 | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-487027                                  | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-777999       | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:59 UTC |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 12:14 UTC | 02 Oct 23 12:14 UTC |
	| start   | -p newest-cni-929075 --memory=2200 --alsologtostderr   | newest-cni-929075            | jenkins | v1.31.2 | 02 Oct 23 12:14 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 12:14:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 12:14:07.439143  389933 out.go:296] Setting OutFile to fd 1 ...
	I1002 12:14:07.439473  389933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:14:07.439483  389933 out.go:309] Setting ErrFile to fd 2...
	I1002 12:14:07.439488  389933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 12:14:07.439684  389933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 12:14:07.440279  389933 out.go:303] Setting JSON to false
	I1002 12:14:07.441360  389933 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10594,"bootTime":1696238254,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 12:14:07.441422  389933 start.go:138] virtualization: kvm guest
	I1002 12:14:07.444787  389933 out.go:177] * [newest-cni-929075] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 12:14:07.446411  389933 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 12:14:07.446476  389933 notify.go:220] Checking for updates...
	I1002 12:14:07.449580  389933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 12:14:07.450911  389933 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 12:14:07.452194  389933 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 12:14:07.453414  389933 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 12:14:07.454543  389933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 12:14:07.456240  389933 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:14:07.456362  389933 config.go:182] Loaded profile config "embed-certs-487027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:14:07.456466  389933 config.go:182] Loaded profile config "no-preload-304121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:14:07.456615  389933 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 12:14:07.493860  389933 out.go:177] * Using the kvm2 driver based on user configuration
	I1002 12:14:07.495173  389933 start.go:298] selected driver: kvm2
	I1002 12:14:07.495188  389933 start.go:902] validating driver "kvm2" against <nil>
	I1002 12:14:07.495200  389933 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 12:14:07.495927  389933 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:14:07.496008  389933 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 12:14:07.511711  389933 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 12:14:07.511760  389933 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W1002 12:14:07.511816  389933 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1002 12:14:07.512018  389933 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 12:14:07.512054  389933 cni.go:84] Creating CNI manager for ""
	I1002 12:14:07.512064  389933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 12:14:07.512071  389933 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 12:14:07.512081  389933 start_flags.go:321] config:
	{Name:newest-cni-929075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-929075 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:14:07.512229  389933 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 12:14:07.514590  389933 out.go:177] * Starting control plane node newest-cni-929075 in cluster newest-cni-929075
	I1002 12:14:07.516088  389933 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 12:14:07.516136  389933 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1002 12:14:07.516144  389933 cache.go:57] Caching tarball of preloaded images
	I1002 12:14:07.516233  389933 preload.go:174] Found /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 12:14:07.516243  389933 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 12:14:07.516334  389933 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/config.json ...
	I1002 12:14:07.516351  389933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/config.json: {Name:mk63314271bc9ebe46627fccddb5cde06b2b76f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:14:07.516509  389933 start.go:365] acquiring machines lock for newest-cni-929075: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 12:14:07.516538  389933 start.go:369] acquired machines lock for "newest-cni-929075" in 15.906µs
	I1002 12:14:07.516554  389933 start.go:93] Provisioning new machine with config: &{Name:newest-cni-929075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-929075 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 12:14:07.516620  389933 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 12:14:07.518428  389933 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 12:14:07.518569  389933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:14:07.518614  389933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:14:07.532909  389933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38871
	I1002 12:14:07.533375  389933 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:14:07.533907  389933 main.go:141] libmachine: Using API Version  1
	I1002 12:14:07.533935  389933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:14:07.534400  389933 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:14:07.534626  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetMachineName
	I1002 12:14:07.534790  389933 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:14:07.534948  389933 start.go:159] libmachine.API.Create for "newest-cni-929075" (driver="kvm2")
	I1002 12:14:07.534983  389933 client.go:168] LocalClient.Create starting
	I1002 12:14:07.535028  389933 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem
	I1002 12:14:07.535070  389933 main.go:141] libmachine: Decoding PEM data...
	I1002 12:14:07.535094  389933 main.go:141] libmachine: Parsing certificate...
	I1002 12:14:07.535164  389933 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem
	I1002 12:14:07.535192  389933 main.go:141] libmachine: Decoding PEM data...
	I1002 12:14:07.535211  389933 main.go:141] libmachine: Parsing certificate...
	I1002 12:14:07.535234  389933 main.go:141] libmachine: Running pre-create checks...
	I1002 12:14:07.535248  389933 main.go:141] libmachine: (newest-cni-929075) Calling .PreCreateCheck
	I1002 12:14:07.535621  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetConfigRaw
	I1002 12:14:07.536056  389933 main.go:141] libmachine: Creating machine...
	I1002 12:14:07.536077  389933 main.go:141] libmachine: (newest-cni-929075) Calling .Create
	I1002 12:14:07.536231  389933 main.go:141] libmachine: (newest-cni-929075) Creating KVM machine...
	I1002 12:14:07.537645  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found existing default KVM network
	I1002 12:14:07.539320  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:07.539156  389957 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b7:42:a7} reservation:<nil>}
	I1002 12:14:07.540460  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:07.540376  389957 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:22:03:3f} reservation:<nil>}
	I1002 12:14:07.541295  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:07.541188  389957 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:6a:22:23} reservation:<nil>}
	I1002 12:14:07.542522  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:07.542440  389957 network.go:214] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:69:64:a9} reservation:<nil>}
	I1002 12:14:07.545038  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:07.544938  389957 network.go:209] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00048a610}
	I1002 12:14:07.553863  389933 main.go:141] libmachine: (newest-cni-929075) DBG | trying to create private KVM network mk-newest-cni-929075 192.168.83.0/24...
	I1002 12:14:07.634907  389933 main.go:141] libmachine: (newest-cni-929075) DBG | private KVM network mk-newest-cni-929075 192.168.83.0/24 created
	I1002 12:14:07.634951  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:07.634866  389957 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 12:14:07.634969  389933 main.go:141] libmachine: (newest-cni-929075) Setting up store path in /home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075 ...
	I1002 12:14:07.634988  389933 main.go:141] libmachine: (newest-cni-929075) Building disk image from file:///home/jenkins/minikube-integration/17340-332611/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1002 12:14:07.635149  389933 main.go:141] libmachine: (newest-cni-929075) Downloading /home/jenkins/minikube-integration/17340-332611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17340-332611/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1002 12:14:07.883096  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:07.882924  389957 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/id_rsa...
	I1002 12:14:08.199859  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:08.199732  389957 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/newest-cni-929075.rawdisk...
	I1002 12:14:08.199896  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Writing magic tar header
	I1002 12:14:08.199913  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Writing SSH key tar header
	I1002 12:14:08.199927  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:08.199875  389957 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075 ...
	I1002 12:14:08.200086  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075
	I1002 12:14:08.200108  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube/machines
	I1002 12:14:08.200132  389933 main.go:141] libmachine: (newest-cni-929075) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075 (perms=drwx------)
	I1002 12:14:08.200153  389933 main.go:141] libmachine: (newest-cni-929075) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube/machines (perms=drwxr-xr-x)
	I1002 12:14:08.200175  389933 main.go:141] libmachine: (newest-cni-929075) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611/.minikube (perms=drwxr-xr-x)
	I1002 12:14:08.200193  389933 main.go:141] libmachine: (newest-cni-929075) Setting executable bit set on /home/jenkins/minikube-integration/17340-332611 (perms=drwxrwxr-x)
	I1002 12:14:08.200217  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 12:14:08.200233  389933 main.go:141] libmachine: (newest-cni-929075) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 12:14:08.200251  389933 main.go:141] libmachine: (newest-cni-929075) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 12:14:08.200265  389933 main.go:141] libmachine: (newest-cni-929075) Creating domain...
	I1002 12:14:08.200299  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17340-332611
	I1002 12:14:08.200331  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1002 12:14:08.200368  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Checking permissions on dir: /home/jenkins
	I1002 12:14:08.200386  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Checking permissions on dir: /home
	I1002 12:14:08.200397  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Skipping /home - not owner
	I1002 12:14:08.201613  389933 main.go:141] libmachine: (newest-cni-929075) define libvirt domain using xml: 
	I1002 12:14:08.201636  389933 main.go:141] libmachine: (newest-cni-929075) <domain type='kvm'>
	I1002 12:14:08.201648  389933 main.go:141] libmachine: (newest-cni-929075)   <name>newest-cni-929075</name>
	I1002 12:14:08.201666  389933 main.go:141] libmachine: (newest-cni-929075)   <memory unit='MiB'>2200</memory>
	I1002 12:14:08.201681  389933 main.go:141] libmachine: (newest-cni-929075)   <vcpu>2</vcpu>
	I1002 12:14:08.201690  389933 main.go:141] libmachine: (newest-cni-929075)   <features>
	I1002 12:14:08.201704  389933 main.go:141] libmachine: (newest-cni-929075)     <acpi/>
	I1002 12:14:08.201717  389933 main.go:141] libmachine: (newest-cni-929075)     <apic/>
	I1002 12:14:08.201731  389933 main.go:141] libmachine: (newest-cni-929075)     <pae/>
	I1002 12:14:08.201743  389933 main.go:141] libmachine: (newest-cni-929075)     
	I1002 12:14:08.201758  389933 main.go:141] libmachine: (newest-cni-929075)   </features>
	I1002 12:14:08.201774  389933 main.go:141] libmachine: (newest-cni-929075)   <cpu mode='host-passthrough'>
	I1002 12:14:08.201788  389933 main.go:141] libmachine: (newest-cni-929075)   
	I1002 12:14:08.201797  389933 main.go:141] libmachine: (newest-cni-929075)   </cpu>
	I1002 12:14:08.201811  389933 main.go:141] libmachine: (newest-cni-929075)   <os>
	I1002 12:14:08.201824  389933 main.go:141] libmachine: (newest-cni-929075)     <type>hvm</type>
	I1002 12:14:08.201839  389933 main.go:141] libmachine: (newest-cni-929075)     <boot dev='cdrom'/>
	I1002 12:14:08.201852  389933 main.go:141] libmachine: (newest-cni-929075)     <boot dev='hd'/>
	I1002 12:14:08.201866  389933 main.go:141] libmachine: (newest-cni-929075)     <bootmenu enable='no'/>
	I1002 12:14:08.201879  389933 main.go:141] libmachine: (newest-cni-929075)   </os>
	I1002 12:14:08.201893  389933 main.go:141] libmachine: (newest-cni-929075)   <devices>
	I1002 12:14:08.201907  389933 main.go:141] libmachine: (newest-cni-929075)     <disk type='file' device='cdrom'>
	I1002 12:14:08.201927  389933 main.go:141] libmachine: (newest-cni-929075)       <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/boot2docker.iso'/>
	I1002 12:14:08.201941  389933 main.go:141] libmachine: (newest-cni-929075)       <target dev='hdc' bus='scsi'/>
	I1002 12:14:08.201955  389933 main.go:141] libmachine: (newest-cni-929075)       <readonly/>
	I1002 12:14:08.201968  389933 main.go:141] libmachine: (newest-cni-929075)     </disk>
	I1002 12:14:08.201984  389933 main.go:141] libmachine: (newest-cni-929075)     <disk type='file' device='disk'>
	I1002 12:14:08.202000  389933 main.go:141] libmachine: (newest-cni-929075)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 12:14:08.202020  389933 main.go:141] libmachine: (newest-cni-929075)       <source file='/home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/newest-cni-929075.rawdisk'/>
	I1002 12:14:08.202032  389933 main.go:141] libmachine: (newest-cni-929075)       <target dev='hda' bus='virtio'/>
	I1002 12:14:08.202042  389933 main.go:141] libmachine: (newest-cni-929075)     </disk>
	I1002 12:14:08.202053  389933 main.go:141] libmachine: (newest-cni-929075)     <interface type='network'>
	I1002 12:14:08.202070  389933 main.go:141] libmachine: (newest-cni-929075)       <source network='mk-newest-cni-929075'/>
	I1002 12:14:08.202085  389933 main.go:141] libmachine: (newest-cni-929075)       <model type='virtio'/>
	I1002 12:14:08.202099  389933 main.go:141] libmachine: (newest-cni-929075)     </interface>
	I1002 12:14:08.202113  389933 main.go:141] libmachine: (newest-cni-929075)     <interface type='network'>
	I1002 12:14:08.202128  389933 main.go:141] libmachine: (newest-cni-929075)       <source network='default'/>
	I1002 12:14:08.202142  389933 main.go:141] libmachine: (newest-cni-929075)       <model type='virtio'/>
	I1002 12:14:08.202156  389933 main.go:141] libmachine: (newest-cni-929075)     </interface>
	I1002 12:14:08.202168  389933 main.go:141] libmachine: (newest-cni-929075)     <serial type='pty'>
	I1002 12:14:08.202183  389933 main.go:141] libmachine: (newest-cni-929075)       <target port='0'/>
	I1002 12:14:08.202195  389933 main.go:141] libmachine: (newest-cni-929075)     </serial>
	I1002 12:14:08.202210  389933 main.go:141] libmachine: (newest-cni-929075)     <console type='pty'>
	I1002 12:14:08.202224  389933 main.go:141] libmachine: (newest-cni-929075)       <target type='serial' port='0'/>
	I1002 12:14:08.202238  389933 main.go:141] libmachine: (newest-cni-929075)     </console>
	I1002 12:14:08.202250  389933 main.go:141] libmachine: (newest-cni-929075)     <rng model='virtio'>
	I1002 12:14:08.202263  389933 main.go:141] libmachine: (newest-cni-929075)       <backend model='random'>/dev/random</backend>
	I1002 12:14:08.202276  389933 main.go:141] libmachine: (newest-cni-929075)     </rng>
	I1002 12:14:08.202289  389933 main.go:141] libmachine: (newest-cni-929075)     
	I1002 12:14:08.202301  389933 main.go:141] libmachine: (newest-cni-929075)     
	I1002 12:14:08.202316  389933 main.go:141] libmachine: (newest-cni-929075)   </devices>
	I1002 12:14:08.202328  389933 main.go:141] libmachine: (newest-cni-929075) </domain>
	I1002 12:14:08.202343  389933 main.go:141] libmachine: (newest-cni-929075) 
	I1002 12:14:08.210926  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2a:50:5e in network default
	I1002 12:14:08.211545  389933 main.go:141] libmachine: (newest-cni-929075) Ensuring networks are active...
	I1002 12:14:08.211575  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:08.212174  389933 main.go:141] libmachine: (newest-cni-929075) Ensuring network default is active
	I1002 12:14:08.212446  389933 main.go:141] libmachine: (newest-cni-929075) Ensuring network mk-newest-cni-929075 is active
	I1002 12:14:08.212926  389933 main.go:141] libmachine: (newest-cni-929075) Getting domain xml...
	I1002 12:14:08.213605  389933 main.go:141] libmachine: (newest-cni-929075) Creating domain...
	I1002 12:14:09.504333  389933 main.go:141] libmachine: (newest-cni-929075) Waiting to get IP...
	I1002 12:14:09.505102  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:09.505550  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:09.505641  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:09.505564  389957 retry.go:31] will retry after 308.145581ms: waiting for machine to come up
	I1002 12:14:09.815075  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:09.815632  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:09.815663  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:09.815595  389957 retry.go:31] will retry after 328.787137ms: waiting for machine to come up
	I1002 12:14:10.145981  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:10.146494  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:10.146528  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:10.146413  389957 retry.go:31] will retry after 362.041752ms: waiting for machine to come up
	I1002 12:14:10.509644  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:10.510094  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:10.510129  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:10.510038  389957 retry.go:31] will retry after 514.710376ms: waiting for machine to come up
	I1002 12:14:11.026961  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:11.027450  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:11.027491  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:11.027390  389957 retry.go:31] will retry after 545.789907ms: waiting for machine to come up
	I1002 12:14:11.575193  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:11.575631  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:11.575657  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:11.575578  389957 retry.go:31] will retry after 644.459981ms: waiting for machine to come up
	I1002 12:14:12.221616  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:12.222127  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:12.222154  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:12.222069  389957 retry.go:31] will retry after 1.074468524s: waiting for machine to come up
	I1002 12:14:13.297669  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:13.298220  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:13.298252  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:13.298169  389957 retry.go:31] will retry after 1.126830159s: waiting for machine to come up
	I1002 12:14:14.427021  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:14.427503  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:14.427540  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:14.427439  389957 retry.go:31] will retry after 1.637152644s: waiting for machine to come up
	I1002 12:14:16.067245  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:16.067676  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:16.067712  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:16.067646  389957 retry.go:31] will retry after 1.618895619s: waiting for machine to come up
	I1002 12:14:17.688337  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:17.688833  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:17.688869  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:17.688772  389957 retry.go:31] will retry after 2.311429982s: waiting for machine to come up
	I1002 12:14:20.002096  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:20.002771  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:20.002805  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:20.002730  389957 retry.go:31] will retry after 3.242475322s: waiting for machine to come up
	I1002 12:14:23.246400  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:23.246839  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:23.246868  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:23.246792  389957 retry.go:31] will retry after 4.373869377s: waiting for machine to come up
	I1002 12:14:27.622371  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:27.622985  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find current IP address of domain newest-cni-929075 in network mk-newest-cni-929075
	I1002 12:14:27.623042  389933 main.go:141] libmachine: (newest-cni-929075) DBG | I1002 12:14:27.622943  389957 retry.go:31] will retry after 4.726197421s: waiting for machine to come up
	I1002 12:14:32.351292  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.351734  389933 main.go:141] libmachine: (newest-cni-929075) Found IP for machine: 192.168.83.146
	I1002 12:14:32.351770  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has current primary IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.351781  389933 main.go:141] libmachine: (newest-cni-929075) Reserving static IP address...
	I1002 12:14:32.352148  389933 main.go:141] libmachine: (newest-cni-929075) DBG | unable to find host DHCP lease matching {name: "newest-cni-929075", mac: "52:54:00:2d:e3:39", ip: "192.168.83.146"} in network mk-newest-cni-929075
	I1002 12:14:32.429329  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Getting to WaitForSSH function...
	I1002 12:14:32.429357  389933 main.go:141] libmachine: (newest-cni-929075) Reserved static IP address: 192.168.83.146
	I1002 12:14:32.429373  389933 main.go:141] libmachine: (newest-cni-929075) Waiting for SSH to be available...
	I1002 12:14:32.432356  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.432861  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:32.432901  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.433057  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Using SSH client type: external
	I1002 12:14:32.433089  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/id_rsa (-rw-------)
	I1002 12:14:32.433150  389933 main.go:141] libmachine: (newest-cni-929075) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 12:14:32.433179  389933 main.go:141] libmachine: (newest-cni-929075) DBG | About to run SSH command:
	I1002 12:14:32.433206  389933 main.go:141] libmachine: (newest-cni-929075) DBG | exit 0
	I1002 12:14:32.526186  389933 main.go:141] libmachine: (newest-cni-929075) DBG | SSH cmd err, output: <nil>: 
	I1002 12:14:32.526421  389933 main.go:141] libmachine: (newest-cni-929075) KVM machine creation complete!
	I1002 12:14:32.526819  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetConfigRaw
	I1002 12:14:32.527357  389933 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:14:32.527585  389933 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:14:32.527746  389933 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 12:14:32.527764  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetState
	I1002 12:14:32.529097  389933 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 12:14:32.529118  389933 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 12:14:32.529128  389933 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 12:14:32.529138  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:32.531641  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.531950  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:32.531984  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.532118  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:32.532305  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:32.532498  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:32.532666  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:32.532831  389933 main.go:141] libmachine: Using SSH client type: native
	I1002 12:14:32.533180  389933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.146 22 <nil> <nil>}
	I1002 12:14:32.533200  389933 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 12:14:32.653780  389933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 12:14:32.653816  389933 main.go:141] libmachine: Detecting the provisioner...
	I1002 12:14:32.653829  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:32.656654  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.657053  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:32.657086  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.657253  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:32.657465  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:32.657656  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:32.657866  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:32.658065  389933 main.go:141] libmachine: Using SSH client type: native
	I1002 12:14:32.658507  389933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.146 22 <nil> <nil>}
	I1002 12:14:32.658530  389933 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 12:14:32.783252  389933 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1002 12:14:32.783327  389933 main.go:141] libmachine: found compatible host: buildroot
	I1002 12:14:32.783336  389933 main.go:141] libmachine: Provisioning with buildroot...
	I1002 12:14:32.783350  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetMachineName
	I1002 12:14:32.783628  389933 buildroot.go:166] provisioning hostname "newest-cni-929075"
	I1002 12:14:32.783657  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetMachineName
	I1002 12:14:32.783858  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:32.787335  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.787756  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:32.787789  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.788024  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:32.788236  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:32.788420  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:32.788576  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:32.788769  389933 main.go:141] libmachine: Using SSH client type: native
	I1002 12:14:32.789109  389933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.146 22 <nil> <nil>}
	I1002 12:14:32.789125  389933 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-929075 && echo "newest-cni-929075" | sudo tee /etc/hostname
	I1002 12:14:32.924671  389933 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-929075
	
	I1002 12:14:32.924708  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:32.927594  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.927932  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:32.927968  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:32.928216  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:32.928437  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:32.928651  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:32.928807  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:32.929000  389933 main.go:141] libmachine: Using SSH client type: native
	I1002 12:14:32.929377  389933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.146 22 <nil> <nil>}
	I1002 12:14:32.929398  389933 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-929075' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-929075/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-929075' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 12:14:33.062885  389933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 12:14:33.062921  389933 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 12:14:33.062952  389933 buildroot.go:174] setting up certificates
	I1002 12:14:33.062973  389933 provision.go:83] configureAuth start
	I1002 12:14:33.062994  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetMachineName
	I1002 12:14:33.063331  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetIP
	I1002 12:14:33.066125  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.066591  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.066634  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.066742  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:33.069452  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.069839  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.069873  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.069929  389933 provision.go:138] copyHostCerts
	I1002 12:14:33.070005  389933 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 12:14:33.070020  389933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 12:14:33.070084  389933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 12:14:33.070206  389933 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 12:14:33.070219  389933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 12:14:33.070262  389933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 12:14:33.070350  389933 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 12:14:33.070380  389933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 12:14:33.070417  389933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 12:14:33.070485  389933 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.newest-cni-929075 san=[192.168.83.146 192.168.83.146 localhost 127.0.0.1 minikube newest-cni-929075]
	I1002 12:14:33.193013  389933 provision.go:172] copyRemoteCerts
	I1002 12:14:33.193090  389933 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 12:14:33.193128  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:33.195744  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.196137  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.196180  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.196373  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:33.196580  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:33.196862  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:33.197069  389933 sshutil.go:53] new ssh client: &{IP:192.168.83.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/id_rsa Username:docker}
	I1002 12:14:33.288384  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 12:14:33.312055  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1002 12:14:33.335846  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 12:14:33.359249  389933 provision.go:86] duration metric: configureAuth took 296.256123ms
	I1002 12:14:33.359279  389933 buildroot.go:189] setting minikube options for container-runtime
	I1002 12:14:33.359516  389933 config.go:182] Loaded profile config "newest-cni-929075": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:14:33.359611  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:33.362182  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.362575  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.362615  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.362794  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:33.363031  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:33.363246  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:33.363412  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:33.363619  389933 main.go:141] libmachine: Using SSH client type: native
	I1002 12:14:33.363925  389933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.146 22 <nil> <nil>}
	I1002 12:14:33.363947  389933 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 12:14:33.684236  389933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 12:14:33.684276  389933 main.go:141] libmachine: Checking connection to Docker...
	I1002 12:14:33.684290  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetURL
	I1002 12:14:33.685730  389933 main.go:141] libmachine: (newest-cni-929075) DBG | Using libvirt version 6000000
	I1002 12:14:33.688673  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.689094  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.689141  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.689309  389933 main.go:141] libmachine: Docker is up and running!
	I1002 12:14:33.689326  389933 main.go:141] libmachine: Reticulating splines...
	I1002 12:14:33.689332  389933 client.go:171] LocalClient.Create took 26.154339132s
	I1002 12:14:33.689369  389933 start.go:167] duration metric: libmachine.API.Create for "newest-cni-929075" took 26.154421654s
	I1002 12:14:33.689383  389933 start.go:300] post-start starting for "newest-cni-929075" (driver="kvm2")
	I1002 12:14:33.689398  389933 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 12:14:33.689422  389933 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:14:33.689747  389933 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 12:14:33.689782  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:33.692902  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.692957  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.692985  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.693039  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:33.693255  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:33.693432  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:33.693605  389933 sshutil.go:53] new ssh client: &{IP:192.168.83.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/id_rsa Username:docker}
	I1002 12:14:33.788382  389933 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 12:14:33.792840  389933 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 12:14:33.792865  389933 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 12:14:33.792925  389933 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 12:14:33.792991  389933 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 12:14:33.793070  389933 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 12:14:33.801831  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 12:14:33.825672  389933 start.go:303] post-start completed in 136.274816ms
	I1002 12:14:33.825726  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetConfigRaw
	I1002 12:14:33.826385  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetIP
	I1002 12:14:33.829277  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.829664  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.829698  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.829952  389933 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/config.json ...
	I1002 12:14:33.830118  389933 start.go:128] duration metric: createHost completed in 26.313486601s
	I1002 12:14:33.830148  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:33.832409  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.832775  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.832813  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.832945  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:33.833148  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:33.833329  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:33.833497  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:33.833690  389933 main.go:141] libmachine: Using SSH client type: native
	I1002 12:14:33.834043  389933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.146 22 <nil> <nil>}
	I1002 12:14:33.834056  389933 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 12:14:33.959219  389933 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696248873.941376365
	
	I1002 12:14:33.959248  389933 fix.go:206] guest clock: 1696248873.941376365
	I1002 12:14:33.959258  389933 fix.go:219] Guest: 2023-10-02 12:14:33.941376365 +0000 UTC Remote: 2023-10-02 12:14:33.830134 +0000 UTC m=+26.424868911 (delta=111.242365ms)
	I1002 12:14:33.959285  389933 fix.go:190] guest clock delta is within tolerance: 111.242365ms
	I1002 12:14:33.959291  389933 start.go:83] releasing machines lock for "newest-cni-929075", held for 26.442744791s
	I1002 12:14:33.959322  389933 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:14:33.959646  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetIP
	I1002 12:14:33.962608  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.963052  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.963082  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.963218  389933 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:14:33.963739  389933 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:14:33.963958  389933 main.go:141] libmachine: (newest-cni-929075) Calling .DriverName
	I1002 12:14:33.964060  389933 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 12:14:33.964107  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:33.964205  389933 ssh_runner.go:195] Run: cat /version.json
	I1002 12:14:33.964225  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHHostname
	I1002 12:14:33.967076  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.967196  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.967440  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.967470  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.967551  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:33.967576  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:33.967599  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:33.967792  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHPort
	I1002 12:14:33.967816  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:33.967937  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHKeyPath
	I1002 12:14:33.968018  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:33.968116  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetSSHUsername
	I1002 12:14:33.968165  389933 sshutil.go:53] new ssh client: &{IP:192.168.83.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/id_rsa Username:docker}
	I1002 12:14:33.968241  389933 sshutil.go:53] new ssh client: &{IP:192.168.83.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/newest-cni-929075/id_rsa Username:docker}
	I1002 12:14:34.077174  389933 ssh_runner.go:195] Run: systemctl --version
	I1002 12:14:34.084042  389933 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 12:14:34.243499  389933 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 12:14:34.251193  389933 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 12:14:34.251267  389933 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 12:14:34.266986  389933 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
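The find/mv step above renames conflicting bridge and podman CNI configs out of the way. A minimal sketch of the same operation, with the directory as a parameter so it can run against a scratch copy rather than /etc/cni/net.d, and without sudo:

```shell
# Disable any bridge/podman CNI configs so they stop shadowing the CNI
# minikube is about to install: rename them with a .mk_disabled suffix,
# skipping files that are already disabled.
disable_bridge_cni() {
  dir="$1"
  find "$dir" -maxdepth 1 -type f \
    \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
}
```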
	I1002 12:14:34.267015  389933 start.go:469] detecting cgroup driver to use...
	I1002 12:14:34.267074  389933 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 12:14:34.283318  389933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 12:14:34.297301  389933 docker.go:197] disabling cri-docker service (if available) ...
	I1002 12:14:34.297394  389933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 12:14:34.311917  389933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 12:14:34.325966  389933 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 12:14:34.433208  389933 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 12:14:34.565807  389933 docker.go:213] disabling docker service ...
	I1002 12:14:34.565880  389933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 12:14:34.580322  389933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 12:14:34.592071  389933 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 12:14:34.711661  389933 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 12:14:34.845885  389933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 12:14:34.861163  389933 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 12:14:34.881392  389933 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 12:14:34.881463  389933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:14:34.893436  389933 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 12:14:34.893510  389933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:14:34.906253  389933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 12:14:34.916847  389933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
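The sed edits above rewrite CRI-O's drop-in config in place. As a sketch, with the config path a parameter so it can be exercised on a scratch copy of 02-crio.conf instead of the live /etc/crio file:

```shell
# Point CRI-O at minikube's pause image and the cgroupfs driver, then
# force conmon into the "pod" cgroup: delete any existing conmon_cgroup
# line and re-add it right after cgroup_manager.
configure_crio() {
  conf="$1"
  sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
  sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
  sed -i '/conmon_cgroup = .*/d' "$conf"
  sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
}
```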
	I1002 12:14:34.928534  389933 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 12:14:34.940522  389933 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 12:14:34.950147  389933 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 12:14:34.950220  389933 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 12:14:34.964486  389933 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 12:14:34.974969  389933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 12:14:35.100343  389933 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 12:14:35.279939  389933 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 12:14:35.280025  389933 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 12:14:35.285402  389933 start.go:537] Will wait 60s for crictl version
	I1002 12:14:35.285462  389933 ssh_runner.go:195] Run: which crictl
	I1002 12:14:35.289356  389933 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 12:14:35.331066  389933 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 12:14:35.331176  389933 ssh_runner.go:195] Run: crio --version
	I1002 12:14:35.381989  389933 ssh_runner.go:195] Run: crio --version
	I1002 12:14:35.431200  389933 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 12:14:35.432478  389933 main.go:141] libmachine: (newest-cni-929075) Calling .GetIP
	I1002 12:14:35.435272  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:35.435647  389933 main.go:141] libmachine: (newest-cni-929075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:e3:39", ip: ""} in network mk-newest-cni-929075: {Iface:virbr5 ExpiryTime:2023-10-02 13:14:24 +0000 UTC Type:0 Mac:52:54:00:2d:e3:39 Iaid: IPaddr:192.168.83.146 Prefix:24 Hostname:newest-cni-929075 Clientid:01:52:54:00:2d:e3:39}
	I1002 12:14:35.435681  389933 main.go:141] libmachine: (newest-cni-929075) DBG | domain newest-cni-929075 has defined IP address 192.168.83.146 and MAC address 52:54:00:2d:e3:39 in network mk-newest-cni-929075
	I1002 12:14:35.435851  389933 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1002 12:14:35.439940  389933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
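The grep-and-rewrite one-liner above is minikube's idempotent way of pinning host.minikube.internal. The same flow as a helper, with the file, IP, and hostname as parameters (the log runs it with sudo against /etc/hosts):

```shell
# Replace any stale "<ip>\t<name>" entry with the current one: filter the
# old line out, append the new mapping, and move the result into place.
pin_host() {
  hosts="$1"; ip="$2"; name="$3"
  tab=$(printf '\t')
  { grep -v "${tab}${name}\$" "$hosts" || true
    printf '%s\t%s\n' "$ip" "$name"; } > "$hosts.new"
  mv "$hosts.new" "$hosts"
}
```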
	I1002 12:14:35.452090  389933 localpath.go:92] copying /home/jenkins/minikube-integration/17340-332611/.minikube/client.crt -> /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/client.crt
	I1002 12:14:35.452242  389933 localpath.go:117] copying /home/jenkins/minikube-integration/17340-332611/.minikube/client.key -> /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/client.key
	I1002 12:14:35.454108  389933 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1002 12:14:35.455582  389933 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 12:14:35.455639  389933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 12:14:35.490623  389933 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 12:14:35.490699  389933 ssh_runner.go:195] Run: which lz4
	I1002 12:14:35.495248  389933 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 12:14:35.499492  389933 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 12:14:35.499526  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1002 12:14:37.296484  389933 crio.go:444] Took 1.801260 seconds to copy over tarball
	I1002 12:14:37.296579  389933 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 12:14:40.254011  389933 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.957400802s)
	I1002 12:14:40.254044  389933 crio.go:451] Took 2.957527 seconds to extract the tarball
	I1002 12:14:40.254055  389933 ssh_runner.go:146] rm: /preloaded.tar.lz4
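The copy, `tar -I lz4` extract, and cleanup above can be sketched as one helper. The compressor is a parameter here because the log uses lz4, which may not be installed everywhere; gzip exercises the same flow:

```shell
# Extract a preloaded image tarball under the target root (the log uses
# /var) and remove the tarball afterwards to free disk space.
extract_preload() {
  tarball="$1"; root="$2"; comp="$3"
  [ -f "$tarball" ] || return 1
  tar -I "$comp" -C "$root" -xf "$tarball"
  rm -f "$tarball"
}
```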
	I1002 12:14:40.300106  389933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 12:14:40.366315  389933 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 12:14:40.366342  389933 cache_images.go:84] Images are preloaded, skipping loading
	I1002 12:14:40.366432  389933 ssh_runner.go:195] Run: crio config
	I1002 12:14:40.436116  389933 cni.go:84] Creating CNI manager for ""
	I1002 12:14:40.436145  389933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 12:14:40.436170  389933 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1002 12:14:40.436203  389933 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.83.146 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-929075 NodeName:newest-cni-929075 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.83.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 12:14:40.436429  389933 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-929075"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 12:14:40.436566  389933 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-929075 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:newest-cni-929075 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 12:14:40.436641  389933 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 12:14:40.447026  389933 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 12:14:40.447110  389933 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 12:14:40.456913  389933 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I1002 12:14:40.474810  389933 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 12:14:40.492305  389933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1002 12:14:40.509976  389933 ssh_runner.go:195] Run: grep 192.168.83.146	control-plane.minikube.internal$ /etc/hosts
	I1002 12:14:40.514103  389933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.146	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 12:14:40.528578  389933 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075 for IP: 192.168.83.146
	I1002 12:14:40.528617  389933 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:14:40.528819  389933 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 12:14:40.528888  389933 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 12:14:40.528990  389933 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/client.key
	I1002 12:14:40.529019  389933 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.key.aeaa3825
	I1002 12:14:40.529036  389933 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.crt.aeaa3825 with IP's: [192.168.83.146 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 12:14:40.747346  389933 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.crt.aeaa3825 ...
	I1002 12:14:40.747381  389933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.crt.aeaa3825: {Name:mkbcbdde62ae8d8d5a9965d1ae02a1ea9e3c5119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:14:40.747594  389933 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.key.aeaa3825 ...
	I1002 12:14:40.747618  389933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.key.aeaa3825: {Name:mk8fb7f1ceba877582e11948a715ef084120a6ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:14:40.747726  389933 certs.go:337] copying /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.crt.aeaa3825 -> /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.crt
	I1002 12:14:40.747791  389933 certs.go:341] copying /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.key.aeaa3825 -> /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.key
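The certs.go/crypto.go steps above issue an apiserver cert signed by minikubeCA whose SANs cover the node IP plus the in-cluster service IPs listed in the log. A rough openssl equivalent, standing in for minikube's Go crypto helpers (the function name and file layout here are illustrative only):

```shell
# Create a throwaway CA, then issue an apiserver cert whose
# subjectAltName covers every IP passed in.
gen_apiserver_cert() {
  dir="$1"; shift
  sans=$(printf 'IP:%s,' "$@"); sans=${sans%,}
  openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj /CN=minikubeCA \
    -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null
  openssl req -newkey rsa:2048 -nodes -subj /CN=minikube \
    -keyout "$dir/apiserver.key" -out "$dir/apiserver.csr" 2>/dev/null
  printf 'subjectAltName=%s\n' "$sans" > "$dir/san.ext"
  openssl x509 -req -in "$dir/apiserver.csr" -CA "$dir/ca.crt" \
    -CAkey "$dir/ca.key" -CAcreateserial -days 1 \
    -extfile "$dir/san.ext" -out "$dir/apiserver.crt" 2>/dev/null
}
```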
	I1002 12:14:40.747842  389933 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/proxy-client.key
	I1002 12:14:40.747860  389933 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/proxy-client.crt with IP's: []
	I1002 12:14:40.828193  389933 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/proxy-client.crt ...
	I1002 12:14:40.828225  389933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/proxy-client.crt: {Name:mk4e165e331add53ef25eb446ee6b7812b9e34fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:14:40.828412  389933 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/proxy-client.key ...
	I1002 12:14:40.828428  389933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/proxy-client.key: {Name:mk48dc53f3385190d1953597ac7942ece1f65bb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:14:40.828679  389933 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 12:14:40.828727  389933 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 12:14:40.828747  389933 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 12:14:40.828778  389933 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 12:14:40.828811  389933 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 12:14:40.828848  389933 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 12:14:40.828908  389933 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 12:14:40.829567  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 12:14:40.856999  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 12:14:40.883209  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 12:14:40.911694  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/newest-cni-929075/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 12:14:40.937500  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 12:14:40.962681  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 12:14:40.987988  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 12:14:41.011847  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 12:14:41.036472  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 12:14:41.061487  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 12:14:41.086651  389933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 12:14:41.112978  389933 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 12:14:41.129585  389933 ssh_runner.go:195] Run: openssl version
	I1002 12:14:41.135636  389933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 12:14:41.145998  389933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 12:14:41.150869  389933 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 12:14:41.150931  389933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 12:14:41.157093  389933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 12:14:41.168134  389933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 12:14:41.178716  389933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:14:41.183804  389933 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:14:41.183869  389933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 12:14:41.190158  389933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 12:14:41.201213  389933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 12:14:41.212400  389933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 12:14:41.218016  389933 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 12:14:41.218087  389933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 12:14:41.225245  389933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
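The hash-and-symlink sequence above is how OpenSSL trust directories are populated: each cert is linked under its own name and under the subject-hash name OpenSSL looks up at verification time (e.g. b5213941.0 for minikubeCA.pem in the log). A sketch with the cert and trust dir as parameters (the log targets /etc/ssl/certs via sudo):

```shell
# Link the cert into the trust dir under its own name, then under the
# <subject-hash>.0 name that OpenSSL uses for CA lookup.
install_ca() {
  cert="$1"; certsdir="$2"
  name=$(basename "$cert")
  ln -fs "$cert" "$certsdir/$name"
  hash=$(openssl x509 -hash -noout -in "$cert")
  ln -fs "$certsdir/$name" "$certsdir/$hash.0"
}
```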
	I1002 12:14:41.236391  389933 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 12:14:41.240827  389933 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 12:14:41.240877  389933 kubeadm.go:404] StartCluster: {Name:newest-cni-929075 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.2 ClusterName:newest-cni-929075 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.146 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/mini
kube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 12:14:41.240958  389933 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 12:14:41.241007  389933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 12:14:41.287386  389933 cri.go:89] found id: ""
	I1002 12:14:41.287479  389933 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 12:14:41.297226  389933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 12:14:41.306856  389933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 12:14:41.316275  389933 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 12:14:41.316336  389933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 12:14:41.442096  389933 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 12:14:41.442378  389933 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 12:14:41.725465  389933 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 12:14:41.725598  389933 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 12:14:41.725775  389933 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 12:14:41.979444  389933 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 12:14:42.075418  389933 out.go:204]   - Generating certificates and keys ...
	I1002 12:14:42.075642  389933 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 12:14:42.075735  389933 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 12:14:42.103391  389933 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 12:14:42.297354  389933 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1002 12:14:42.392550  389933 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1002 12:14:42.708570  389933 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1002 12:14:42.934754  389933 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1002 12:14:42.934944  389933 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-929075] and IPs [192.168.83.146 127.0.0.1 ::1]
	I1002 12:14:42.982696  389933 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1002 12:14:42.982921  389933 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-929075] and IPs [192.168.83.146 127.0.0.1 ::1]
	I1002 12:14:43.034824  389933 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 12:14:43.311737  389933 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 12:14:43.499321  389933 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1002 12:14:43.499727  389933 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 12:14:43.647999  389933 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 12:14:43.894955  389933 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 12:14:44.088222  389933 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 12:14:44.221464  389933 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 12:14:44.222093  389933 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 12:14:44.225803  389933 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-02 11:54:52 UTC, ends at Mon 2023-10-02 12:14:48 UTC. --
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.910264281Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0785da38-acbe-410d-8828-5ab1e72c2e50 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.915433821Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=1b91a480-99b5-4604-9631-a74182f736d5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.916272475Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ec313f0f0ab1dc9f814119a453653e9e0ae9370321a3fb6248e2f775633b7c69,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9c5b5a2d-e464-477e-9b5c-bf830ee9c640,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696248035887312368,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5b5a2d-e464-477e-9b5c-bf830ee9c640,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-
system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-10-02T12:00:35.548417238Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b664e90b18af15241022e0d8a9b8e71ff21f99398942b56eca5fbf782d155c24,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-6c2hc,Uid:020790e8-555b-4455-8e82-6ea49bb4212a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696248035666952843,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-6c2hc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 020790e8-555b-4455-8e82-6ea49bb4212a
,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-02T12:00:35.331170564Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ecbb6c9a8e481d039c79978366d66e15fbedf7485f6ea8b3179bd6b8cc4abece,Metadata:&PodSandboxMetadata{Name:kube-proxy-sprhm,Uid:d032413b-07c5-4478-bbdf-93383f85f73d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696248032896457472,Labels:map[string]string{controller-revision-hash: 5cbdb8dcbd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-sprhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d032413b-07c5-4478-bbdf-93383f85f73d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-02T12:00:32.563610348Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2a75f65496e4cfdda7933426a7c4b62b12f4073561c6ef9bb49c9ec1b1dc5ca0,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-st2bd,Uid
:6623fa3f-9a60-4364-bf08-7e84ae35d4b6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696248032841851571,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-st2bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6623fa3f-9a60-4364-bf08-7e84ae35d4b6,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-02T12:00:32.511365442Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:77f10f7dc88d2b7ed5140ca5bd7318c11e8af29ff90106ffc5a0a04666d3b783,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-304121,Uid:bad74ed262ff474e1338c8ec0e95d7eb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696248011529271500,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad74ed262ff474e1338c8ec0e95
d7eb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bad74ed262ff474e1338c8ec0e95d7eb,kubernetes.io/config.seen: 2023-10-02T12:00:10.964266699Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:58547f6befd05beae984b861939c5e2d2bdd14b8246e8b1e241a63293bbe179c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-304121,Uid:583e04191a09ca04403f10ea67b5a093,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696248011502494203,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 583e04191a09ca04403f10ea67b5a093,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.143:8443,kubernetes.io/config.hash: 583e04191a09ca04403f10ea67b5a093,kubernetes.io/config.seen: 2023-10-02T12:00:10.964264843Z,kubernetes.io/config.source: file
,},RuntimeHandler:,},&PodSandbox{Id:53f5e0765e7c0e67157a95f8d18ed6dc0eb670bb8ad854a3ccd4f9ae809f1919,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-304121,Uid:7cf4fde3c63df71c35302becd8e0a1e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696248011498123550,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf4fde3c63df71c35302becd8e0a1e4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7cf4fde3c63df71c35302becd8e0a1e4,kubernetes.io/config.seen: 2023-10-02T12:00:10.964265936Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:32c3582344c54a990c237d8bee99c855b64a63fe2a327e609fc1023848bab57e,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-304121,Uid:cf54f7ca4952bccd9496d46885a9b99a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696248
011484344202,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf54f7ca4952bccd9496d46885a9b99a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.143:2379,kubernetes.io/config.hash: cf54f7ca4952bccd9496d46885a9b99a,kubernetes.io/config.seen: 2023-10-02T12:00:10.964261213Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=1b91a480-99b5-4604-9631-a74182f736d5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.917244858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d76c520c-92a8-468d-8a9c-7f5d22a222d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.917298008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d76c520c-92a8-468d-8a9c-7f5d22a222d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.917547577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e172cd6aafba5f002d03f4e61064bc228967577aa9e377fe0b2bb9587f62d58,PodSandboxId:ec313f0f0ab1dc9f814119a453653e9e0ae9370321a3fb6248e2f775633b7c69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1696248036592853251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5b5a2d-e464-477e-9b5c-bf830ee9c640,},Annotations:map[string]string{io.kubernetes.container.hash: d8abb607,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587b3cddeef4cd70d012f9d07cd05ed5cab79768992a248b74b4fa3a6004790e,PodSandboxId:ecbb6c9a8e481d039c79978366d66e15fbedf7485f6ea8b3179bd6b8cc4abece,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1696248035247541993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sprhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d032413b-07c5-4478-bbdf-93383f85f73d,},Annotations:map[string]string{io.kubernetes.container.hash: a8552626,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a9443897a0d50a81f429a9d66aebda04c07159512952e3a89dc7f9405a51d24,PodSandboxId:2a75f65496e4cfdda7933426a7c4b62b12f4073561c6ef9bb49c9ec1b1dc5ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1696248034681177336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-st2bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6623fa3f-9a60-4364-bf08-7e84ae35d4b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ec8c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef0f8b845289f20cedb980bbdd46e4ba218f355ef6e70326cf177d3cceb7904,PodSandboxId:32c3582344c54a990c237d8bee99c855b64a63fe2a327e609fc1023848bab57e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1696248012521472249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf54f7ca4952bccd9496d46885a9b99a,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6f95869f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6899c47560e43a818786d66b58efdea1e54569cb91c811e5887187815f6ed7,PodSandboxId:53f5e0765e7c0e67157a95f8d18ed6dc0eb670bb8ad854a3ccd4f9ae809f1919,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1696248012462424201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf4fde3c63df71
c35302becd8e0a1e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97038dd3f301df983c530d45dad98f3c95a3a4624069ea9bcfcb0e970ffaa7d,PodSandboxId:58547f6befd05beae984b861939c5e2d2bdd14b8246e8b1e241a63293bbe179c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1696248012152633049,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 583e04191a09ca04403f10ea67
b5a093,},Annotations:map[string]string{io.kubernetes.container.hash: cb894288,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e8d031a517413a16fda2adabc04ca2f730fb541cf8fe025a18de5bfa8595a9,PodSandboxId:77f10f7dc88d2b7ed5140ca5bd7318c11e8af29ff90106ffc5a0a04666d3b783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1696248011996250640,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad74ed262ff474e1338c8ec0e95d7eb,},An
notations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d76c520c-92a8-468d-8a9c-7f5d22a222d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.920090478Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=87fccf43-05fd-4f92-a879-ac676b858ccc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.920395304Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248887920385407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=87fccf43-05fd-4f92-a879-ac676b858ccc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.921432299Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=07f5dc67-b9a1-40de-8e3c-7b0a67e45175 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.921473472Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=07f5dc67-b9a1-40de-8e3c-7b0a67e45175 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.921616766Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e172cd6aafba5f002d03f4e61064bc228967577aa9e377fe0b2bb9587f62d58,PodSandboxId:ec313f0f0ab1dc9f814119a453653e9e0ae9370321a3fb6248e2f775633b7c69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1696248036592853251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5b5a2d-e464-477e-9b5c-bf830ee9c640,},Annotations:map[string]string{io.kubernetes.container.hash: d8abb607,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587b3cddeef4cd70d012f9d07cd05ed5cab79768992a248b74b4fa3a6004790e,PodSandboxId:ecbb6c9a8e481d039c79978366d66e15fbedf7485f6ea8b3179bd6b8cc4abece,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1696248035247541993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sprhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d032413b-07c5-4478-bbdf-93383f85f73d,},Annotations:map[string]string{io.kubernetes.container.hash: a8552626,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a9443897a0d50a81f429a9d66aebda04c07159512952e3a89dc7f9405a51d24,PodSandboxId:2a75f65496e4cfdda7933426a7c4b62b12f4073561c6ef9bb49c9ec1b1dc5ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1696248034681177336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-st2bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6623fa3f-9a60-4364-bf08-7e84ae35d4b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ec8c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef0f8b845289f20cedb980bbdd46e4ba218f355ef6e70326cf177d3cceb7904,PodSandboxId:32c3582344c54a990c237d8bee99c855b64a63fe2a327e609fc1023848bab57e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1696248012521472249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf54f7ca4952bccd9496d46885a9b99a,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6f95869f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6899c47560e43a818786d66b58efdea1e54569cb91c811e5887187815f6ed7,PodSandboxId:53f5e0765e7c0e67157a95f8d18ed6dc0eb670bb8ad854a3ccd4f9ae809f1919,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1696248012462424201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf4fde3c63df71
c35302becd8e0a1e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97038dd3f301df983c530d45dad98f3c95a3a4624069ea9bcfcb0e970ffaa7d,PodSandboxId:58547f6befd05beae984b861939c5e2d2bdd14b8246e8b1e241a63293bbe179c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1696248012152633049,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 583e04191a09ca04403f10ea67
b5a093,},Annotations:map[string]string{io.kubernetes.container.hash: cb894288,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e8d031a517413a16fda2adabc04ca2f730fb541cf8fe025a18de5bfa8595a9,PodSandboxId:77f10f7dc88d2b7ed5140ca5bd7318c11e8af29ff90106ffc5a0a04666d3b783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1696248011996250640,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad74ed262ff474e1338c8ec0e95d7eb,},An
notations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=07f5dc67-b9a1-40de-8e3c-7b0a67e45175 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.972636437Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fcb1eb1b-c53d-4432-a973-41570baf8876 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.972696295Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fcb1eb1b-c53d-4432-a973-41570baf8876 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.973915056Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2703af32-b6ea-4bdf-a02f-dadf74d5ab80 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.974394640Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248887974377588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=2703af32-b6ea-4bdf-a02f-dadf74d5ab80 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.975492991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=25e96416-f609-48a7-bcf0-e5e77b7dbefb name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.975541070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=25e96416-f609-48a7-bcf0-e5e77b7dbefb name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:47 no-preload-304121 crio[729]: time="2023-10-02 12:14:47.975781762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e172cd6aafba5f002d03f4e61064bc228967577aa9e377fe0b2bb9587f62d58,PodSandboxId:ec313f0f0ab1dc9f814119a453653e9e0ae9370321a3fb6248e2f775633b7c69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1696248036592853251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5b5a2d-e464-477e-9b5c-bf830ee9c640,},Annotations:map[string]string{io.kubernetes.container.hash: d8abb607,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587b3cddeef4cd70d012f9d07cd05ed5cab79768992a248b74b4fa3a6004790e,PodSandboxId:ecbb6c9a8e481d039c79978366d66e15fbedf7485f6ea8b3179bd6b8cc4abece,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1696248035247541993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sprhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d032413b-07c5-4478-bbdf-93383f85f73d,},Annotations:map[string]string{io.kubernetes.container.hash: a8552626,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a9443897a0d50a81f429a9d66aebda04c07159512952e3a89dc7f9405a51d24,PodSandboxId:2a75f65496e4cfdda7933426a7c4b62b12f4073561c6ef9bb49c9ec1b1dc5ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1696248034681177336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-st2bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6623fa3f-9a60-4364-bf08-7e84ae35d4b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ec8c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef0f8b845289f20cedb980bbdd46e4ba218f355ef6e70326cf177d3cceb7904,PodSandboxId:32c3582344c54a990c237d8bee99c855b64a63fe2a327e609fc1023848bab57e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1696248012521472249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf54f7ca4952bccd9496d46885a9b99a,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6f95869f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6899c47560e43a818786d66b58efdea1e54569cb91c811e5887187815f6ed7,PodSandboxId:53f5e0765e7c0e67157a95f8d18ed6dc0eb670bb8ad854a3ccd4f9ae809f1919,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1696248012462424201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf4fde3c63df71
c35302becd8e0a1e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97038dd3f301df983c530d45dad98f3c95a3a4624069ea9bcfcb0e970ffaa7d,PodSandboxId:58547f6befd05beae984b861939c5e2d2bdd14b8246e8b1e241a63293bbe179c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1696248012152633049,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 583e04191a09ca04403f10ea67
b5a093,},Annotations:map[string]string{io.kubernetes.container.hash: cb894288,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e8d031a517413a16fda2adabc04ca2f730fb541cf8fe025a18de5bfa8595a9,PodSandboxId:77f10f7dc88d2b7ed5140ca5bd7318c11e8af29ff90106ffc5a0a04666d3b783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1696248011996250640,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad74ed262ff474e1338c8ec0e95d7eb,},An
notations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=25e96416-f609-48a7-bcf0-e5e77b7dbefb name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:48 no-preload-304121 crio[729]: time="2023-10-02 12:14:48.030532388Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b25c00a9-8e16-4356-ae69-c0a170a7ea0f name=/runtime.v1.RuntimeService/Version
	Oct 02 12:14:48 no-preload-304121 crio[729]: time="2023-10-02 12:14:48.030599563Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b25c00a9-8e16-4356-ae69-c0a170a7ea0f name=/runtime.v1.RuntimeService/Version
	Oct 02 12:14:48 no-preload-304121 crio[729]: time="2023-10-02 12:14:48.031974475Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9463985c-1e14-428d-ad09-6ee35d53c7a1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:14:48 no-preload-304121 crio[729]: time="2023-10-02 12:14:48.032460109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248888032447337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=9463985c-1e14-428d-ad09-6ee35d53c7a1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:14:48 no-preload-304121 crio[729]: time="2023-10-02 12:14:48.033105119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2424af78-b2b2-4847-bc0f-57de97b14c86 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:48 no-preload-304121 crio[729]: time="2023-10-02 12:14:48.033153012Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2424af78-b2b2-4847-bc0f-57de97b14c86 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:48 no-preload-304121 crio[729]: time="2023-10-02 12:14:48.033320195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e172cd6aafba5f002d03f4e61064bc228967577aa9e377fe0b2bb9587f62d58,PodSandboxId:ec313f0f0ab1dc9f814119a453653e9e0ae9370321a3fb6248e2f775633b7c69,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1696248036592853251,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5b5a2d-e464-477e-9b5c-bf830ee9c640,},Annotations:map[string]string{io.kubernetes.container.hash: d8abb607,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587b3cddeef4cd70d012f9d07cd05ed5cab79768992a248b74b4fa3a6004790e,PodSandboxId:ecbb6c9a8e481d039c79978366d66e15fbedf7485f6ea8b3179bd6b8cc4abece,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1696248035247541993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sprhm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d032413b-07c5-4478-bbdf-93383f85f73d,},Annotations:map[string]string{io.kubernetes.container.hash: a8552626,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a9443897a0d50a81f429a9d66aebda04c07159512952e3a89dc7f9405a51d24,PodSandboxId:2a75f65496e4cfdda7933426a7c4b62b12f4073561c6ef9bb49c9ec1b1dc5ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1696248034681177336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-st2bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6623fa3f-9a60-4364-bf08-7e84ae35d4b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ec8c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef0f8b845289f20cedb980bbdd46e4ba218f355ef6e70326cf177d3cceb7904,PodSandboxId:32c3582344c54a990c237d8bee99c855b64a63fe2a327e609fc1023848bab57e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1696248012521472249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf54f7ca4952bccd9496d46885a9b99a,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6f95869f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6899c47560e43a818786d66b58efdea1e54569cb91c811e5887187815f6ed7,PodSandboxId:53f5e0765e7c0e67157a95f8d18ed6dc0eb670bb8ad854a3ccd4f9ae809f1919,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1696248012462424201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf4fde3c63df71
c35302becd8e0a1e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97038dd3f301df983c530d45dad98f3c95a3a4624069ea9bcfcb0e970ffaa7d,PodSandboxId:58547f6befd05beae984b861939c5e2d2bdd14b8246e8b1e241a63293bbe179c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1696248012152633049,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 583e04191a09ca04403f10ea67
b5a093,},Annotations:map[string]string{io.kubernetes.container.hash: cb894288,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0e8d031a517413a16fda2adabc04ca2f730fb541cf8fe025a18de5bfa8595a9,PodSandboxId:77f10f7dc88d2b7ed5140ca5bd7318c11e8af29ff90106ffc5a0a04666d3b783,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1696248011996250640,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-304121,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad74ed262ff474e1338c8ec0e95d7eb,},An
notations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2424af78-b2b2-4847-bc0f-57de97b14c86 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e172cd6aafba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   ec313f0f0ab1d       storage-provisioner
	587b3cddeef4c       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   14 minutes ago      Running             kube-proxy                0                   ecbb6c9a8e481       kube-proxy-sprhm
	9a9443897a0d5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   2a75f65496e4c       coredns-5dd5756b68-st2bd
	eef0f8b845289       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   14 minutes ago      Running             etcd                      2                   32c3582344c54       etcd-no-preload-304121
	ea6899c47560e       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   14 minutes ago      Running             kube-controller-manager   2                   53f5e0765e7c0       kube-controller-manager-no-preload-304121
	b97038dd3f301       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   14 minutes ago      Running             kube-apiserver            2                   58547f6befd05       kube-apiserver-no-preload-304121
	b0e8d031a5174       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   14 minutes ago      Running             kube-scheduler            2                   77f10f7dc88d2       kube-scheduler-no-preload-304121
	
	* 
	* ==> coredns [9a9443897a0d50a81f429a9d66aebda04c07159512952e3a89dc7f9405a51d24] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:38339 - 61925 "HINFO IN 947553240253786351.7225225166883565728. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014792553s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-304121
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-304121
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=no-preload-304121
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T12_00_19_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 12:00:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-304121
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 12:14:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 12:10:53 +0000   Mon, 02 Oct 2023 12:00:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 12:10:53 +0000   Mon, 02 Oct 2023 12:00:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 12:10:53 +0000   Mon, 02 Oct 2023 12:00:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 12:10:53 +0000   Mon, 02 Oct 2023 12:00:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.143
	  Hostname:    no-preload-304121
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e666d702cd1a476db2e4ede71244eec6
	  System UUID:                e666d702-cd1a-476d-b2e4-ede71244eec6
	  Boot ID:                    edd92e65-3aab-40f3-a2ba-c9b9a2a278d4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-st2bd                     100m (5%)               0 (0%)                70Mi (3%)                  170Mi (8%)               14m
	  kube-system                 etcd-no-preload-304121                       100m (5%)               0 (0%)                100Mi (4%)                 0 (0%)                   14m
	  kube-system                 kube-apiserver-no-preload-304121             250m (12%)              0 (0%)                0 (0%)                     0 (0%)                   14m
	  kube-system                 kube-controller-manager-no-preload-304121    200m (10%)              0 (0%)                0 (0%)                     0 (0%)                   14m
	  kube-system                 kube-proxy-sprhm                             0 (0%)                  0 (0%)                0 (0%)                     0 (0%)                   14m
	  kube-system                 kube-scheduler-no-preload-304121             100m (5%)               0 (0%)                0 (0%)                     0 (0%)                   14m
	  kube-system                 metrics-server-57f55c9bc5-6c2hc              100m (5%)               0 (0%)                200Mi (9%)                 0 (0%)                   14m
	  kube-system                 storage-provisioner                          0 (0%)                  0 (0%)                0 (0%)                     0 (0%)                   14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)             0 (0%)
	  memory             370Mi (17%)            170Mi (8%)
	  ephemeral-storage  0 (0%)                 0 (0%)
	  hugepages-2Mi      0 (0%)                 0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-304121 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-304121 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-304121 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14m   kubelet          Node no-preload-304121 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m   kubelet          Node no-preload-304121 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-304121 event: Registered Node no-preload-304121 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct 2 11:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076201] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.917880] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.238771] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149063] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.575254] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 2 11:55] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.115809] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.141096] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.114280] systemd-fstab-generator[689]: Ignoring "noauto" for root device
	[  +0.264121] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[ +31.895669] systemd-fstab-generator[1233]: Ignoring "noauto" for root device
	[ +19.888493] kauditd_printk_skb: 29 callbacks suppressed
	[Oct 2 12:00] systemd-fstab-generator[3836]: Ignoring "noauto" for root device
	[  +8.813071] systemd-fstab-generator[4165]: Ignoring "noauto" for root device
	[ +13.576539] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [eef0f8b845289f20cedb980bbdd46e4ba218f355ef6e70326cf177d3cceb7904] <==
	* {"level":"info","ts":"2023-10-02T12:00:14.001593Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6857887556ef56db","local-member-id":"be0eebdc09990bfd","added-peer-id":"be0eebdc09990bfd","added-peer-peer-urls":["https://192.168.39.143:2380"]}
	{"level":"info","ts":"2023-10-02T12:00:14.001635Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T12:00:14.001652Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T12:00:14.001659Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T12:00:14.156118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-02T12:00:14.156222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-02T12:00:14.156267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd received MsgPreVoteResp from be0eebdc09990bfd at term 1"}
	{"level":"info","ts":"2023-10-02T12:00:14.156302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd became candidate at term 2"}
	{"level":"info","ts":"2023-10-02T12:00:14.156326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd received MsgVoteResp from be0eebdc09990bfd at term 2"}
	{"level":"info","ts":"2023-10-02T12:00:14.156356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd became leader at term 2"}
	{"level":"info","ts":"2023-10-02T12:00:14.156389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: be0eebdc09990bfd elected leader be0eebdc09990bfd at term 2"}
	{"level":"info","ts":"2023-10-02T12:00:14.161251Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"be0eebdc09990bfd","local-member-attributes":"{Name:no-preload-304121 ClientURLs:[https://192.168.39.143:2379]}","request-path":"/0/members/be0eebdc09990bfd/attributes","cluster-id":"6857887556ef56db","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T12:00:14.161434Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T12:00:14.162159Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T12:00:14.162843Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.143:2379"}
	{"level":"info","ts":"2023-10-02T12:00:14.162968Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:00:14.180691Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T12:00:14.18087Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-02T12:00:14.167989Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T12:00:14.181755Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6857887556ef56db","local-member-id":"be0eebdc09990bfd","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:00:14.182154Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:00:14.182307Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T12:10:14.393399Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":678}
	{"level":"info","ts":"2023-10-02T12:10:14.396367Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":678,"took":"2.489253ms","hash":1330101004}
	{"level":"info","ts":"2023-10-02T12:10:14.396446Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1330101004,"revision":678,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  12:14:48 up 20 min,  0 users,  load average: 0.13, 0.20, 0.21
	Linux no-preload-304121 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b97038dd3f301df983c530d45dad98f3c95a3a4624069ea9bcfcb0e970ffaa7d] <==
	* W1002 12:10:17.129547       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:10:17.129703       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1002 12:10:17.129829       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 12:10:17.129621       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:10:17.130270       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:10:17.131928       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:11:15.955217       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1002 12:11:17.130658       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:11:17.130749       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1002 12:11:17.130841       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 12:11:17.132910       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:11:17.133143       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:11:17.133179       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:12:15.955373       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1002 12:13:15.954767       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1002 12:13:17.131615       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:13:17.131724       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1002 12:13:17.131789       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 12:13:17.134069       1 handler_proxy.go:93] no RequestInfo found in the context
	E1002 12:13:17.134230       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:13:17.134238       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:14:15.955192       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [ea6899c47560e43a818786d66b58efdea1e54569cb91c811e5887187815f6ed7] <==
	* I1002 12:09:03.043849       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:09:32.672774       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:09:33.053469       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:10:02.679994       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:10:03.063967       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:10:32.686493       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:10:33.074535       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:11:02.693481       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:11:03.085508       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:11:32.699928       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:11:33.096308       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 12:11:57.712384       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="455.358µs"
	E1002 12:12:02.706259       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:12:03.108222       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 12:12:10.708571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="114.582µs"
	E1002 12:12:32.711644       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:12:33.119193       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:13:02.718463       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:13:03.128255       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:13:32.725152       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:13:33.137482       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:14:02.738952       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:14:03.145864       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 12:14:32.746414       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1002 12:14:33.156135       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [587b3cddeef4cd70d012f9d07cd05ed5cab79768992a248b74b4fa3a6004790e] <==
	* I1002 12:00:35.828075       1 server_others.go:69] "Using iptables proxy"
	I1002 12:00:35.852368       1 node.go:141] Successfully retrieved node IP: 192.168.39.143
	I1002 12:00:36.485825       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1002 12:00:36.485868       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 12:00:36.499417       1 server_others.go:152] "Using iptables Proxier"
	I1002 12:00:36.499508       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 12:00:36.499882       1 server.go:846] "Version info" version="v1.28.2"
	I1002 12:00:36.499894       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 12:00:36.508146       1 config.go:188] "Starting service config controller"
	I1002 12:00:36.508518       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 12:00:36.510054       1 config.go:97] "Starting endpoint slice config controller"
	I1002 12:00:36.510111       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 12:00:36.510639       1 config.go:315] "Starting node config controller"
	I1002 12:00:36.512509       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 12:00:36.609803       1 shared_informer.go:318] Caches are synced for service config
	I1002 12:00:36.610995       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 12:00:36.612965       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [b0e8d031a517413a16fda2adabc04ca2f730fb541cf8fe025a18de5bfa8595a9] <==
	* W1002 12:00:17.002316       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 12:00:17.002386       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1002 12:00:17.076757       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 12:00:17.076833       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1002 12:00:17.105133       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 12:00:17.105182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1002 12:00:17.112341       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 12:00:17.112429       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1002 12:00:17.142172       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 12:00:17.142255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1002 12:00:17.189433       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 12:00:17.189496       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1002 12:00:17.206845       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 12:00:17.206941       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1002 12:00:17.356487       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 12:00:17.356601       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1002 12:00:17.365169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1002 12:00:17.365250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1002 12:00:17.366412       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1002 12:00:17.366476       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1002 12:00:17.376241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 12:00:17.376285       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1002 12:00:17.408212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 12:00:17.408268       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1002 12:00:17.871792       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 11:54:52 UTC, ends at Mon 2023-10-02 12:14:48 UTC. --
	Oct 02 12:11:57 no-preload-304121 kubelet[4172]: E1002 12:11:57.691824    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:12:10 no-preload-304121 kubelet[4172]: E1002 12:12:10.691285    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:12:19 no-preload-304121 kubelet[4172]: E1002 12:12:19.803198    4172 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:12:19 no-preload-304121 kubelet[4172]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:12:19 no-preload-304121 kubelet[4172]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:12:19 no-preload-304121 kubelet[4172]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:12:23 no-preload-304121 kubelet[4172]: E1002 12:12:23.692263    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:12:38 no-preload-304121 kubelet[4172]: E1002 12:12:38.691834    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:12:51 no-preload-304121 kubelet[4172]: E1002 12:12:51.691136    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:13:03 no-preload-304121 kubelet[4172]: E1002 12:13:03.692315    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:13:17 no-preload-304121 kubelet[4172]: E1002 12:13:17.692680    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:13:19 no-preload-304121 kubelet[4172]: E1002 12:13:19.802683    4172 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:13:19 no-preload-304121 kubelet[4172]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:13:19 no-preload-304121 kubelet[4172]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:13:19 no-preload-304121 kubelet[4172]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:13:29 no-preload-304121 kubelet[4172]: E1002 12:13:29.693173    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:13:41 no-preload-304121 kubelet[4172]: E1002 12:13:41.692861    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:13:55 no-preload-304121 kubelet[4172]: E1002 12:13:55.690917    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:14:09 no-preload-304121 kubelet[4172]: E1002 12:14:09.691563    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:14:19 no-preload-304121 kubelet[4172]: E1002 12:14:19.808859    4172 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 12:14:19 no-preload-304121 kubelet[4172]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 12:14:19 no-preload-304121 kubelet[4172]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 12:14:19 no-preload-304121 kubelet[4172]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 12:14:24 no-preload-304121 kubelet[4172]: E1002 12:14:24.691263    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	Oct 02 12:14:36 no-preload-304121 kubelet[4172]: E1002 12:14:36.691706    4172 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6c2hc" podUID="020790e8-555b-4455-8e82-6ea49bb4212a"
	
	* 
	* ==> storage-provisioner [6e172cd6aafba5f002d03f4e61064bc228967577aa9e377fe0b2bb9587f62d58] <==
	* I1002 12:00:36.743348       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 12:00:36.759783       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 12:00:36.759882       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 12:00:36.771371       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 12:00:36.772119       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3a698911-938c-4466-9c61-c594ff009531", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-304121_1b567081-1828-4e3b-8959-6db51c8b3cb6 became leader
	I1002 12:00:36.772464       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-304121_1b567081-1828-4e3b-8959-6db51c8b3cb6!
	I1002 12:00:36.873311       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-304121_1b567081-1828-4e3b-8959-6db51c8b3cb6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-304121 -n no-preload-304121
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-304121 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-6c2hc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-304121 describe pod metrics-server-57f55c9bc5-6c2hc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-304121 describe pod metrics-server-57f55c9bc5-6c2hc: exit status 1 (77.479812ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-6c2hc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-304121 describe pod metrics-server-57f55c9bc5-6c2hc: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (308.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (241.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1002 12:10:54.840450  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 12:11:30.122139  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
E1002 12:11:55.306409  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 12:12:07.587067  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 12:12:22.001321  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 12:13:30.519287  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-749860 -n old-k8s-version-749860
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-10-02 12:14:03.288763458 +0000 UTC m=+5893.860449500
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-749860 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-749860 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.512µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-749860 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-749860 -n old-k8s-version-749860
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-749860 logs -n 25
E1002 12:14:04.535299  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-749860 logs -n 25: (1.707439594s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-124285 sudo cat                              | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo                                  | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo find                             | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-124285 sudo crio                             | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-124285                                       | bridge-124285                | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-448198 | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:45 UTC |
	|         | disable-driver-mounts-448198                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:45 UTC | 02 Oct 23 11:47 UTC |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-304121             | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC | 02 Oct 23 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-749860        | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC | 02 Oct 23 11:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-487027            | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC | 02 Oct 23 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-487027                                  | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-777999  | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC | 02 Oct 23 11:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:47 UTC |                     |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-304121                  | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:48 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-304121                                   | no-preload-304121            | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 12:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-749860             | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-749860                              | old-k8s-version-749860       | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 12:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-487027                 | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-487027                                  | embed-certs-487027           | jenkins | v1.31.2 | 02 Oct 23 11:49 UTC | 02 Oct 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-777999       | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-777999 | jenkins | v1.31.2 | 02 Oct 23 11:50 UTC | 02 Oct 23 11:59 UTC |
	|         | default-k8s-diff-port-777999                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 11:50:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 11:50:14.045882  384965 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:50:14.045995  384965 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:50:14.046005  384965 out.go:309] Setting ErrFile to fd 2...
	I1002 11:50:14.046009  384965 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:50:14.046207  384965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 11:50:14.046807  384965 out.go:303] Setting JSON to false
	I1002 11:50:14.047867  384965 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9160,"bootTime":1696238254,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 11:50:14.047937  384965 start.go:138] virtualization: kvm guest
	I1002 11:50:14.050148  384965 out.go:177] * [default-k8s-diff-port-777999] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 11:50:14.051736  384965 notify.go:220] Checking for updates...
	I1002 11:50:14.051738  384965 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 11:50:14.053419  384965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:50:14.055001  384965 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:50:14.056531  384965 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:50:14.057828  384965 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 11:50:14.059154  384965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 11:50:14.060884  384965 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:50:14.061318  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:50:14.061365  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:50:14.077285  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36537
	I1002 11:50:14.077670  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:50:14.078164  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:50:14.078184  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:50:14.078590  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:50:14.078766  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:50:14.079011  384965 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 11:50:14.079285  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:50:14.079321  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:50:14.093519  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38549
	I1002 11:50:14.093897  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:50:14.094331  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:50:14.094375  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:50:14.094689  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:50:14.094875  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:50:14.127852  384965 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 11:50:14.129579  384965 start.go:298] selected driver: kvm2
	I1002 11:50:14.129589  384965 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-777999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:50:14.129734  384965 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 11:50:14.130441  384965 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:50:14.130517  384965 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 11:50:14.145313  384965 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 11:50:14.145678  384965 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 11:50:14.145737  384965 cni.go:84] Creating CNI manager for ""
	I1002 11:50:14.145747  384965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:50:14.145754  384965 start_flags.go:321] config:
	{Name:default-k8s-diff-port-777999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-77799
9 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:50:14.145885  384965 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 11:50:14.147697  384965 out.go:177] * Starting control plane node default-k8s-diff-port-777999 in cluster default-k8s-diff-port-777999
	I1002 11:50:14.518571  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:14.149188  384965 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:50:14.149229  384965 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1002 11:50:14.149243  384965 cache.go:57] Caching tarball of preloaded images
	I1002 11:50:14.149342  384965 preload.go:174] Found /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 11:50:14.149355  384965 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 11:50:14.149469  384965 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/config.json ...
	I1002 11:50:14.149690  384965 start.go:365] acquiring machines lock for default-k8s-diff-port-777999: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:50:17.590603  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:23.670608  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:26.742637  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:32.822640  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:35.894704  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:41.974682  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:45.046703  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:51.126633  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:50:54.198624  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:00.278622  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:03.350650  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:09.430627  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:12.502639  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:18.582668  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:21.654622  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:27.734588  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:30.806674  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:36.886711  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:39.958677  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:46.038638  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:49.110583  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:55.190669  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:51:58.262632  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:04.342658  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:07.414733  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:13.494648  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:16.566610  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:22.646664  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:25.718682  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:31.798673  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:34.870620  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:40.950664  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:44.022695  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:50.102629  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:53.174698  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:52:59.254603  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:02.326684  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:08.406661  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:11.478769  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:17.558670  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:20.630696  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:26.710600  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:29.782676  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:35.862655  384344 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.143:22: connect: no route to host
	I1002 11:53:38.867149  384505 start.go:369] acquired machines lock for "old-k8s-version-749860" in 4m24.621828644s
	I1002 11:53:38.867251  384505 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:53:38.867260  384505 fix.go:54] fixHost starting: 
	I1002 11:53:38.867725  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:53:38.867761  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:53:38.882900  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45593
	I1002 11:53:38.883484  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:53:38.883950  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:53:38.883974  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:53:38.884318  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:53:38.884530  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:38.884688  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:53:38.886067  384505 fix.go:102] recreateIfNeeded on old-k8s-version-749860: state=Stopped err=<nil>
	I1002 11:53:38.886102  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	W1002 11:53:38.886288  384505 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:53:38.888401  384505 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-749860" ...
	I1002 11:53:38.889752  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Start
	I1002 11:53:38.889924  384505 main.go:141] libmachine: (old-k8s-version-749860) Ensuring networks are active...
	I1002 11:53:38.890638  384505 main.go:141] libmachine: (old-k8s-version-749860) Ensuring network default is active
	I1002 11:53:38.890980  384505 main.go:141] libmachine: (old-k8s-version-749860) Ensuring network mk-old-k8s-version-749860 is active
	I1002 11:53:38.891314  384505 main.go:141] libmachine: (old-k8s-version-749860) Getting domain xml...
	I1002 11:53:38.892257  384505 main.go:141] libmachine: (old-k8s-version-749860) Creating domain...
	I1002 11:53:38.864675  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:53:38.864716  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:53:38.866979  384344 machine.go:91] provisioned docker machine in 4m37.398507067s
	I1002 11:53:38.867033  384344 fix.go:56] fixHost completed within 4m37.419547722s
	I1002 11:53:38.867039  384344 start.go:83] releasing machines lock for "no-preload-304121", held for 4m37.419568347s
	W1002 11:53:38.867080  384344 start.go:688] error starting host: provision: host is not running
	W1002 11:53:38.867230  384344 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1002 11:53:38.867240  384344 start.go:703] Will try again in 5 seconds ...
	I1002 11:53:40.120018  384505 main.go:141] libmachine: (old-k8s-version-749860) Waiting to get IP...
	I1002 11:53:40.120927  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:40.121258  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:40.121366  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:40.121241  385500 retry.go:31] will retry after 204.223254ms: waiting for machine to come up
	I1002 11:53:40.326895  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:40.327332  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:40.327351  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:40.327293  385500 retry.go:31] will retry after 300.58131ms: waiting for machine to come up
	I1002 11:53:40.629931  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:40.630293  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:40.630324  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:40.630247  385500 retry.go:31] will retry after 460.804681ms: waiting for machine to come up
	I1002 11:53:41.092440  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:41.092887  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:41.092914  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:41.092838  385500 retry.go:31] will retry after 573.592817ms: waiting for machine to come up
	I1002 11:53:41.668507  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:41.668916  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:41.668955  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:41.668879  385500 retry.go:31] will retry after 647.261387ms: waiting for machine to come up
	I1002 11:53:42.317738  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:42.318193  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:42.318228  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:42.318135  385500 retry.go:31] will retry after 643.115699ms: waiting for machine to come up
	I1002 11:53:42.963169  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:42.963572  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:42.963595  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:42.963517  385500 retry.go:31] will retry after 1.059074571s: waiting for machine to come up
	I1002 11:53:44.024372  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:44.024750  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:44.024785  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:44.024703  385500 retry.go:31] will retry after 1.142402067s: waiting for machine to come up
	I1002 11:53:43.868857  384344 start.go:365] acquiring machines lock for no-preload-304121: {Name:mk24d6a7625ed9b95214546922c49ac1f8f8ffcb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 11:53:45.169146  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:45.169470  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:45.169509  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:45.169430  385500 retry.go:31] will retry after 1.244757741s: waiting for machine to come up
	I1002 11:53:46.415640  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:46.416049  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:46.416078  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:46.416030  385500 retry.go:31] will retry after 2.066150597s: waiting for machine to come up
	I1002 11:53:48.483477  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:48.483998  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:48.484023  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:48.483921  385500 retry.go:31] will retry after 2.521584671s: waiting for machine to come up
	I1002 11:53:51.008090  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:51.008535  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:51.008565  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:51.008455  385500 retry.go:31] will retry after 2.896131667s: waiting for machine to come up
	I1002 11:53:53.905835  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:53.906274  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | unable to find current IP address of domain old-k8s-version-749860 in network mk-old-k8s-version-749860
	I1002 11:53:53.906309  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | I1002 11:53:53.906207  385500 retry.go:31] will retry after 3.463250216s: waiting for machine to come up
	I1002 11:53:58.755219  384787 start.go:369] acquired machines lock for "embed-certs-487027" in 4m10.971064405s
	I1002 11:53:58.755286  384787 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:53:58.755301  384787 fix.go:54] fixHost starting: 
	I1002 11:53:58.755691  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:53:58.755733  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:53:58.772186  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38267
	I1002 11:53:58.772591  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:53:58.773071  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:53:58.773101  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:53:58.773409  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:53:58.773585  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:53:58.773710  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:53:58.775231  384787 fix.go:102] recreateIfNeeded on embed-certs-487027: state=Stopped err=<nil>
	I1002 11:53:58.775273  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	W1002 11:53:58.775449  384787 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:53:58.778132  384787 out.go:177] * Restarting existing kvm2 VM for "embed-certs-487027" ...
	I1002 11:53:57.373844  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.374176  384505 main.go:141] libmachine: (old-k8s-version-749860) Found IP for machine: 192.168.83.82
	I1002 11:53:57.374195  384505 main.go:141] libmachine: (old-k8s-version-749860) Reserving static IP address...
	I1002 11:53:57.374208  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has current primary IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.374680  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "old-k8s-version-749860", mac: "52:54:00:d4:c3:b0", ip: "192.168.83.82"} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.374711  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | skip adding static IP to network mk-old-k8s-version-749860 - found existing host DHCP lease matching {name: "old-k8s-version-749860", mac: "52:54:00:d4:c3:b0", ip: "192.168.83.82"}
	I1002 11:53:57.374725  384505 main.go:141] libmachine: (old-k8s-version-749860) Reserved static IP address: 192.168.83.82
	I1002 11:53:57.374741  384505 main.go:141] libmachine: (old-k8s-version-749860) Waiting for SSH to be available...
	I1002 11:53:57.374758  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Getting to WaitForSSH function...
	I1002 11:53:57.377368  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.377757  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.377791  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.377890  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Using SSH client type: external
	I1002 11:53:57.377933  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa (-rw-------)
	I1002 11:53:57.377976  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:53:57.377995  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | About to run SSH command:
	I1002 11:53:57.378008  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | exit 0
	I1002 11:53:57.474496  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | SSH cmd err, output: <nil>: 
	I1002 11:53:57.474881  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetConfigRaw
	I1002 11:53:57.475581  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:53:57.478078  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.478423  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.478464  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.478679  384505 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/config.json ...
	I1002 11:53:57.478876  384505 machine.go:88] provisioning docker machine ...
	I1002 11:53:57.478895  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:57.479118  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetMachineName
	I1002 11:53:57.479286  384505 buildroot.go:166] provisioning hostname "old-k8s-version-749860"
	I1002 11:53:57.479300  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetMachineName
	I1002 11:53:57.479509  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:57.481462  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.481768  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.481805  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.481935  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:57.482138  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.482280  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.482438  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:57.482611  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:57.483038  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:57.483051  384505 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-749860 && echo "old-k8s-version-749860" | sudo tee /etc/hostname
	I1002 11:53:57.622724  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-749860
	
	I1002 11:53:57.622760  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:57.626222  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.626663  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.626707  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.626840  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:57.627102  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.627297  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:57.627513  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:57.627678  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:57.628068  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:57.628089  384505 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-749860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-749860/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-749860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:53:57.767587  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:53:57.767664  384505 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:53:57.767708  384505 buildroot.go:174] setting up certificates
	I1002 11:53:57.767721  384505 provision.go:83] configureAuth start
	I1002 11:53:57.767734  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetMachineName
	I1002 11:53:57.768045  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:53:57.771158  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.771591  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.771620  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.771825  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:57.774031  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.774444  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:57.774523  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:57.774529  384505 provision.go:138] copyHostCerts
	I1002 11:53:57.774608  384505 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:53:57.774623  384505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:53:57.774695  384505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:53:57.774787  384505 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:53:57.774797  384505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:53:57.774821  384505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:53:57.774884  384505 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:53:57.774891  384505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:53:57.774912  384505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:53:57.774970  384505 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-749860 san=[192.168.83.82 192.168.83.82 localhost 127.0.0.1 minikube old-k8s-version-749860]
	I1002 11:53:58.003098  384505 provision.go:172] copyRemoteCerts
	I1002 11:53:58.003163  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:53:58.003190  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.005944  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.006310  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.006345  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.006482  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.006734  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.006887  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.007049  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.099927  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:53:58.123424  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 11:53:58.147578  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 11:53:58.171190  384505 provision.go:86] duration metric: configureAuth took 403.448571ms
	I1002 11:53:58.171228  384505 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:53:58.171440  384505 config.go:182] Loaded profile config "old-k8s-version-749860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1002 11:53:58.171575  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.174314  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.174684  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.174723  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.174860  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.175078  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.175274  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.175409  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.175596  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:58.175908  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:58.175923  384505 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:53:58.491028  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:53:58.491062  384505 machine.go:91] provisioned docker machine in 1.012168334s
	I1002 11:53:58.491072  384505 start.go:300] post-start starting for "old-k8s-version-749860" (driver="kvm2")
	I1002 11:53:58.491085  384505 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:53:58.491106  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.491521  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:53:58.491558  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.494009  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.494382  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.494415  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.494546  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.494753  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.494903  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.495037  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.588465  384505 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:53:58.592844  384505 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:53:58.592872  384505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:53:58.592940  384505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:53:58.593047  384505 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:53:58.593171  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:53:58.601583  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:53:58.624453  384505 start.go:303] post-start completed in 133.365398ms
	I1002 11:53:58.624486  384505 fix.go:56] fixHost completed within 19.757224844s
	I1002 11:53:58.624511  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.627104  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.627476  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.627534  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.627695  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.627913  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.628105  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.628253  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.628426  384505 main.go:141] libmachine: Using SSH client type: native
	I1002 11:53:58.628749  384505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.82 22 <nil> <nil>}
	I1002 11:53:58.628762  384505 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:53:58.755032  384505 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247638.703145377
	
	I1002 11:53:58.755056  384505 fix.go:206] guest clock: 1696247638.703145377
	I1002 11:53:58.755066  384505 fix.go:219] Guest: 2023-10-02 11:53:58.703145377 +0000 UTC Remote: 2023-10-02 11:53:58.624490602 +0000 UTC m=+284.515069275 (delta=78.654775ms)
	I1002 11:53:58.755092  384505 fix.go:190] guest clock delta is within tolerance: 78.654775ms
	I1002 11:53:58.755098  384505 start.go:83] releasing machines lock for "old-k8s-version-749860", held for 19.887910329s
	I1002 11:53:58.755126  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.755438  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:53:58.758172  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.758431  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.758467  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.758673  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.759288  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.759466  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:53:58.759560  384505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:53:58.759620  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.759717  384505 ssh_runner.go:195] Run: cat /version.json
	I1002 11:53:58.759748  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:53:58.762471  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.762618  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.762847  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.762879  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.762911  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:53:58.762943  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:53:58.763162  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.763185  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:53:58.763347  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.763363  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:53:58.763487  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.763661  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.763671  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:53:58.763828  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:53:58.880436  384505 ssh_runner.go:195] Run: systemctl --version
	I1002 11:53:58.886540  384505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:53:59.035347  384505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:53:59.041510  384505 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:53:59.041604  384505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:53:59.056030  384505 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:53:59.056062  384505 start.go:469] detecting cgroup driver to use...
	I1002 11:53:59.056147  384505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:53:59.068680  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:53:59.080770  384505 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:53:59.080823  384505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:53:59.093059  384505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:53:59.106603  384505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:53:59.223135  384505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:53:59.364085  384505 docker.go:213] disabling docker service ...
	I1002 11:53:59.364161  384505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:53:59.378131  384505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:53:59.390380  384505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:53:59.522236  384505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:53:59.663336  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:53:59.677221  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:53:59.694283  384505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1002 11:53:59.694380  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.703409  384505 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:53:59.703481  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.712316  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.721255  384505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:53:59.731204  384505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:53:59.741152  384505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:53:59.748978  384505 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:53:59.749036  384505 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:53:59.761692  384505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:53:59.770571  384505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:53:59.882809  384505 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:54:00.046741  384505 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:54:00.046843  384505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:54:00.051911  384505 start.go:537] Will wait 60s for crictl version
	I1002 11:54:00.051988  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:00.055847  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:54:00.099999  384505 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:54:00.100084  384505 ssh_runner.go:195] Run: crio --version
	I1002 11:54:00.155271  384505 ssh_runner.go:195] Run: crio --version
	I1002 11:54:00.202213  384505 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1002 11:53:58.780030  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Start
	I1002 11:53:58.780201  384787 main.go:141] libmachine: (embed-certs-487027) Ensuring networks are active...
	I1002 11:53:58.780857  384787 main.go:141] libmachine: (embed-certs-487027) Ensuring network default is active
	I1002 11:53:58.781206  384787 main.go:141] libmachine: (embed-certs-487027) Ensuring network mk-embed-certs-487027 is active
	I1002 11:53:58.781581  384787 main.go:141] libmachine: (embed-certs-487027) Getting domain xml...
	I1002 11:53:58.782269  384787 main.go:141] libmachine: (embed-certs-487027) Creating domain...
	I1002 11:54:00.079808  384787 main.go:141] libmachine: (embed-certs-487027) Waiting to get IP...
	I1002 11:54:00.080676  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:00.081052  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:00.081202  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:00.081070  385615 retry.go:31] will retry after 291.88616ms: waiting for machine to come up
	I1002 11:54:00.374941  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:00.375493  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:00.375526  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:00.375441  385615 retry.go:31] will retry after 315.924643ms: waiting for machine to come up
	I1002 11:54:00.693196  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:00.693804  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:00.693840  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:00.693754  385615 retry.go:31] will retry after 473.967353ms: waiting for machine to come up
	I1002 11:54:01.169616  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:01.170137  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:01.170168  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:01.170099  385615 retry.go:31] will retry after 490.884713ms: waiting for machine to come up
	I1002 11:54:01.662881  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:01.663427  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:01.663459  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:01.663380  385615 retry.go:31] will retry after 590.285109ms: waiting for machine to come up
	I1002 11:54:02.255409  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:02.256020  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:02.256048  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:02.255956  385615 retry.go:31] will retry after 586.734935ms: waiting for machine to come up
	I1002 11:54:00.203709  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetIP
	I1002 11:54:00.206822  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:54:00.207269  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:54:00.207308  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:54:00.207533  384505 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1002 11:54:00.211596  384505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:00.224503  384505 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1002 11:54:00.224558  384505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:00.267915  384505 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1002 11:54:00.267986  384505 ssh_runner.go:195] Run: which lz4
	I1002 11:54:00.272086  384505 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1002 11:54:00.276281  384505 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:54:00.276322  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1002 11:54:02.169153  384505 crio.go:444] Took 1.897111 seconds to copy over tarball
	I1002 11:54:02.169248  384505 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 11:54:02.844615  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:02.845091  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:02.845129  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:02.845049  385615 retry.go:31] will retry after 765.906555ms: waiting for machine to come up
	I1002 11:54:03.612904  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:03.613374  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:03.613515  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:03.613306  385615 retry.go:31] will retry after 1.240249135s: waiting for machine to come up
	I1002 11:54:04.855370  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:04.855832  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:04.855858  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:04.855785  385615 retry.go:31] will retry after 1.741253702s: waiting for machine to come up
	I1002 11:54:06.599800  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:06.600279  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:06.600307  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:06.600221  385615 retry.go:31] will retry after 1.945988456s: waiting for machine to come up
	I1002 11:54:05.257359  384505 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.088072266s)
	I1002 11:54:05.257395  384505 crio.go:451] Took 3.088214 seconds to extract the tarball
	I1002 11:54:05.257408  384505 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 11:54:05.296693  384505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:05.347131  384505 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1002 11:54:05.347156  384505 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 11:54:05.347231  384505 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:54:05.347239  384505 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.347291  384505 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.347523  384505 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.347545  384505 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.347590  384505 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1002 11:54:05.347712  384505 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.347797  384505 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.349061  384505 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.349109  384505 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:54:05.349136  384505 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.349165  384505 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.349072  384505 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.349076  384505 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.349075  384505 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.349490  384505 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1002 11:54:05.494581  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.497665  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.499676  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.503426  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1002 11:54:05.504502  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.507776  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.511534  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.589967  384505 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1002 11:54:05.590038  384505 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.590101  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.653382  384505 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1002 11:54:05.653450  384505 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.653539  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.674391  384505 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1002 11:54:05.674430  384505 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1002 11:54:05.674447  384505 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.674467  384505 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1002 11:54:05.674508  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.674498  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.674583  384505 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1002 11:54:05.674621  384505 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.674671  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.676359  384505 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1002 11:54:05.676390  384505 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.676425  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.680824  384505 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1002 11:54:05.680858  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1002 11:54:05.680871  384505 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.680894  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1002 11:54:05.680905  384505 ssh_runner.go:195] Run: which crictl
	I1002 11:54:05.682827  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1002 11:54:05.690404  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1002 11:54:05.690496  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1002 11:54:05.690562  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1002 11:54:05.810224  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1002 11:54:05.840439  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1002 11:54:05.840472  384505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1002 11:54:05.840535  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1002 11:54:05.840544  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1002 11:54:05.840583  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1002 11:54:05.840643  384505 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1002 11:54:05.840663  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1002 11:54:05.874997  384505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1002 11:54:05.875049  384505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1002 11:54:05.875079  384505 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1002 11:54:05.875136  384505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1002 11:54:06.317119  384505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:54:07.926701  384505 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.609537315s)
	I1002 11:54:07.926715  384505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.051548545s)
	I1002 11:54:07.926786  384505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1002 11:54:07.926855  384505 cache_images.go:92] LoadImages completed in 2.579686998s
	W1002 11:54:07.926953  384505 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I1002 11:54:07.927077  384505 ssh_runner.go:195] Run: crio config
	I1002 11:54:07.991410  384505 cni.go:84] Creating CNI manager for ""
	I1002 11:54:07.991433  384505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:07.991452  384505 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:54:07.991473  384505 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.82 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-749860 NodeName:old-k8s-version-749860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1002 11:54:07.991665  384505 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-749860"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.82
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.82"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-749860
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.82:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:54:07.991752  384505 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-749860 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-749860 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:54:07.991814  384505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1002 11:54:08.002239  384505 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:54:08.002313  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:54:08.012375  384505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1002 11:54:08.031554  384505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:54:08.050801  384505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1002 11:54:08.068326  384505 ssh_runner.go:195] Run: grep 192.168.83.82	control-plane.minikube.internal$ /etc/hosts
	I1002 11:54:08.072798  384505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.82	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:08.085261  384505 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860 for IP: 192.168.83.82
	I1002 11:54:08.085320  384505 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:54:08.085511  384505 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:54:08.085555  384505 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:54:08.085682  384505 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/client.key
	I1002 11:54:08.085771  384505 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/apiserver.key.bc78c23c
	I1002 11:54:08.085823  384505 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/proxy-client.key
	I1002 11:54:08.085973  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:54:08.086020  384505 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:54:08.086035  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:54:08.086071  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:54:08.086101  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:54:08.086163  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:54:08.086237  384505 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:08.087038  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:54:08.111230  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:54:08.133515  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:54:08.157382  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/old-k8s-version-749860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:54:08.180186  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:54:08.210075  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:54:08.232068  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:54:08.253873  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:54:08.276866  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:54:08.300064  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:54:08.322265  384505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:54:08.346808  384505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:54:08.367194  384505 ssh_runner.go:195] Run: openssl version
	I1002 11:54:08.374709  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:54:08.389274  384505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:08.395338  384505 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:08.395420  384505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:08.401338  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:54:08.412228  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:54:08.423293  384505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:54:08.428146  384505 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:54:08.428213  384505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:54:08.434177  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:54:08.449342  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:54:08.463678  384505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:54:08.468723  384505 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:54:08.468795  384505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:54:08.476711  384505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:54:08.492116  384505 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:54:08.498510  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:54:08.504961  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:54:08.513012  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:54:08.520620  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:54:08.528578  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:54:08.534685  384505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:54:08.541262  384505 kubeadm.go:404] StartCluster: {Name:old-k8s-version-749860 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-749860 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.82 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:54:08.541401  384505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:54:08.541474  384505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:08.579821  384505 cri.go:89] found id: ""
	I1002 11:54:08.579899  384505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:54:08.590328  384505 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:54:08.590359  384505 kubeadm.go:636] restartCluster start
	I1002 11:54:08.590419  384505 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:54:08.600034  384505 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:08.601660  384505 kubeconfig.go:92] found "old-k8s-version-749860" server: "https://192.168.83.82:8443"
	I1002 11:54:08.605641  384505 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:54:08.615274  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:08.615340  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:08.630952  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:08.630979  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:08.631032  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:08.642433  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:08.547687  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:08.548295  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:08.548331  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:08.548238  385615 retry.go:31] will retry after 2.817726625s: waiting for machine to come up
	I1002 11:54:11.367346  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:11.367909  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:11.367943  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:11.367859  385615 retry.go:31] will retry after 3.066326625s: waiting for machine to come up
	I1002 11:54:09.142569  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:09.143607  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:09.155937  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:09.642536  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:09.642637  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:09.655230  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:10.142683  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:10.142769  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:10.155206  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:10.642757  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:10.642857  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:10.659345  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:11.142860  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:11.142955  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:11.158336  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:11.642849  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:11.642934  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:11.658819  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:12.143538  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:12.143645  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:12.159984  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:12.642536  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:12.642679  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:12.658031  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:13.143496  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:13.143607  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:13.159279  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:13.643567  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:13.643659  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:13.657189  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:14.435299  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:14.435744  384787 main.go:141] libmachine: (embed-certs-487027) DBG | unable to find current IP address of domain embed-certs-487027 in network mk-embed-certs-487027
	I1002 11:54:14.435777  384787 main.go:141] libmachine: (embed-certs-487027) DBG | I1002 11:54:14.435699  385615 retry.go:31] will retry after 3.446313194s: waiting for machine to come up
	I1002 11:54:19.007568  384965 start.go:369] acquired machines lock for "default-k8s-diff-port-777999" in 4m4.857829673s
	I1002 11:54:19.007726  384965 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:54:19.007735  384965 fix.go:54] fixHost starting: 
	I1002 11:54:19.008181  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:54:19.008225  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:54:19.025286  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38903
	I1002 11:54:19.025755  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:54:19.026243  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:54:19.026265  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:54:19.026648  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:54:19.026869  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:19.027056  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:54:19.028773  384965 fix.go:102] recreateIfNeeded on default-k8s-diff-port-777999: state=Stopped err=<nil>
	I1002 11:54:19.028799  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	W1002 11:54:19.028984  384965 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:54:19.031466  384965 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-777999" ...
	I1002 11:54:19.033140  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Start
	I1002 11:54:19.033346  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Ensuring networks are active...
	I1002 11:54:19.034009  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Ensuring network default is active
	I1002 11:54:19.034440  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Ensuring network mk-default-k8s-diff-port-777999 is active
	I1002 11:54:19.034843  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Getting domain xml...
	I1002 11:54:19.035519  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Creating domain...
	I1002 11:54:14.142550  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:14.142618  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:14.154742  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:14.643429  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:14.643522  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:14.656075  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:15.142577  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:15.142669  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:15.154422  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:15.643360  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:15.643450  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:15.655255  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:16.142806  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:16.142948  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:16.154896  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:16.643505  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:16.643581  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:16.655413  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:17.142981  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:17.143087  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:17.156411  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:17.642996  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:17.643100  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:17.656886  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:18.143481  384505 api_server.go:166] Checking apiserver status ...
	I1002 11:54:18.143563  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:18.157184  384505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:18.616095  384505 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:54:18.616128  384505 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:54:18.616142  384505 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:54:18.616204  384505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:18.654952  384505 cri.go:89] found id: ""
	I1002 11:54:18.655033  384505 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:54:18.674155  384505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:54:18.685052  384505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:54:18.685116  384505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:18.695816  384505 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:18.695844  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:18.821270  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:17.886333  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.886895  384787 main.go:141] libmachine: (embed-certs-487027) Found IP for machine: 192.168.72.147
	I1002 11:54:17.886926  384787 main.go:141] libmachine: (embed-certs-487027) Reserving static IP address...
	I1002 11:54:17.886947  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has current primary IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.887365  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "embed-certs-487027", mac: "52:54:00:06:60:23", ip: "192.168.72.147"} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.887396  384787 main.go:141] libmachine: (embed-certs-487027) DBG | skip adding static IP to network mk-embed-certs-487027 - found existing host DHCP lease matching {name: "embed-certs-487027", mac: "52:54:00:06:60:23", ip: "192.168.72.147"}
	I1002 11:54:17.887404  384787 main.go:141] libmachine: (embed-certs-487027) Reserved static IP address: 192.168.72.147
	I1002 11:54:17.887420  384787 main.go:141] libmachine: (embed-certs-487027) Waiting for SSH to be available...
	I1002 11:54:17.887437  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Getting to WaitForSSH function...
	I1002 11:54:17.889775  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.890175  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.890214  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.890410  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Using SSH client type: external
	I1002 11:54:17.890434  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa (-rw-------)
	I1002 11:54:17.890470  384787 main.go:141] libmachine: (embed-certs-487027) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:54:17.890502  384787 main.go:141] libmachine: (embed-certs-487027) DBG | About to run SSH command:
	I1002 11:54:17.890514  384787 main.go:141] libmachine: (embed-certs-487027) DBG | exit 0
	I1002 11:54:17.974015  384787 main.go:141] libmachine: (embed-certs-487027) DBG | SSH cmd err, output: <nil>: 
	I1002 11:54:17.974444  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetConfigRaw
	I1002 11:54:17.975209  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:17.977468  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.977798  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.977837  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.978016  384787 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/config.json ...
	I1002 11:54:17.978201  384787 machine.go:88] provisioning docker machine ...
	I1002 11:54:17.978220  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:17.978460  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetMachineName
	I1002 11:54:17.978651  384787 buildroot.go:166] provisioning hostname "embed-certs-487027"
	I1002 11:54:17.978669  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetMachineName
	I1002 11:54:17.978817  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:17.980872  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.981298  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:17.981333  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:17.981395  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:17.981587  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:17.981746  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:17.981885  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:17.982020  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:17.982399  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:17.982413  384787 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-487027 && echo "embed-certs-487027" | sudo tee /etc/hostname
	I1002 11:54:18.103274  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-487027
	
	I1002 11:54:18.103311  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.106230  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.106654  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.106709  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.106847  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.107082  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.107266  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.107400  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.107589  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:18.108051  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:18.108081  384787 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-487027' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-487027/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-487027' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:54:18.222398  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:54:18.222431  384787 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:54:18.222453  384787 buildroot.go:174] setting up certificates
	I1002 11:54:18.222488  384787 provision.go:83] configureAuth start
	I1002 11:54:18.222500  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetMachineName
	I1002 11:54:18.222817  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:18.225631  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.226114  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.226150  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.226262  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.228719  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.229096  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.229130  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.229268  384787 provision.go:138] copyHostCerts
	I1002 11:54:18.229336  384787 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:54:18.229351  384787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:54:18.229399  384787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:54:18.229480  384787 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:54:18.229492  384787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:54:18.229511  384787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:54:18.229563  384787 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:54:18.229570  384787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:54:18.229586  384787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:54:18.229630  384787 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.embed-certs-487027 san=[192.168.72.147 192.168.72.147 localhost 127.0.0.1 minikube embed-certs-487027]
	I1002 11:54:18.296130  384787 provision.go:172] copyRemoteCerts
	I1002 11:54:18.296187  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:54:18.296212  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.298721  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.299036  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.299059  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.299181  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.299363  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.299479  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.299628  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:18.384449  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 11:54:18.406096  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:54:18.427407  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 11:54:18.448829  384787 provision.go:86] duration metric: configureAuth took 226.314252ms
	I1002 11:54:18.448858  384787 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:54:18.449065  384787 config.go:182] Loaded profile config "embed-certs-487027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:54:18.449178  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.451995  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.452365  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.452405  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.452596  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.452786  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.452958  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.453077  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.453213  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:18.453571  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:18.453606  384787 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:54:18.754879  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:54:18.754913  384787 machine.go:91] provisioned docker machine in 776.69782ms
	I1002 11:54:18.754927  384787 start.go:300] post-start starting for "embed-certs-487027" (driver="kvm2")
	I1002 11:54:18.754941  384787 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:54:18.754966  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:18.755361  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:54:18.755392  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.758184  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.758644  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.758700  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.758788  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.758981  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.759149  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.759414  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:18.847614  384787 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:54:18.851792  384787 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:54:18.851821  384787 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:54:18.851911  384787 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:54:18.852023  384787 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:54:18.852152  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:54:18.861415  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:18.883190  384787 start.go:303] post-start completed in 128.242372ms
	I1002 11:54:18.883222  384787 fix.go:56] fixHost completed within 20.127922888s
	I1002 11:54:18.883249  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:18.885771  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.886114  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:18.886141  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:18.886335  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:18.886598  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.886784  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:18.886922  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:18.887111  384787 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:18.887556  384787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.147 22 <nil> <nil>}
	I1002 11:54:18.887574  384787 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:54:19.007352  384787 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247658.948838951
	
	I1002 11:54:19.007388  384787 fix.go:206] guest clock: 1696247658.948838951
	I1002 11:54:19.007404  384787 fix.go:219] Guest: 2023-10-02 11:54:18.948838951 +0000 UTC Remote: 2023-10-02 11:54:18.883226893 +0000 UTC m=+271.237550126 (delta=65.612058ms)
	I1002 11:54:19.007464  384787 fix.go:190] guest clock delta is within tolerance: 65.612058ms
	I1002 11:54:19.007471  384787 start.go:83] releasing machines lock for "embed-certs-487027", held for 20.25221392s
	I1002 11:54:19.007510  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.007831  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:19.011020  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.011386  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:19.011418  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.011602  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.012303  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.012520  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:54:19.012602  384787 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:54:19.012660  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:19.012946  384787 ssh_runner.go:195] Run: cat /version.json
	I1002 11:54:19.012976  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:54:19.015652  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.015935  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.016016  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:19.016063  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.016284  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:19.016411  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:19.016439  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:19.016482  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:19.016638  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:54:19.016653  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:19.016868  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:54:19.016871  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:19.017017  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:54:19.017199  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:54:19.124634  384787 ssh_runner.go:195] Run: systemctl --version
	I1002 11:54:19.130340  384787 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:54:19.278814  384787 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:54:19.284549  384787 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:54:19.284618  384787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:54:19.300872  384787 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:54:19.300896  384787 start.go:469] detecting cgroup driver to use...
	I1002 11:54:19.300984  384787 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:54:19.314898  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:54:19.327762  384787 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:54:19.327826  384787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:54:19.341164  384787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:54:19.354542  384787 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:54:19.469125  384787 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:54:19.581195  384787 docker.go:213] disabling docker service ...
	I1002 11:54:19.581260  384787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:54:19.595222  384787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:54:19.607587  384787 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:54:19.725376  384787 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:54:19.828507  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:54:19.845782  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:54:19.868464  384787 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:54:19.868530  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.881554  384787 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:54:19.881633  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.894090  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.905922  384787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:19.918336  384787 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:54:19.931259  384787 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:54:19.939861  384787 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:54:19.939925  384787 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:54:19.954089  384787 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:54:19.966438  384787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:54:20.124666  384787 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:54:20.329505  384787 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:54:20.329602  384787 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:54:20.336428  384787 start.go:537] Will wait 60s for crictl version
	I1002 11:54:20.336499  384787 ssh_runner.go:195] Run: which crictl
	I1002 11:54:20.343269  384787 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:54:20.386249  384787 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:54:20.386331  384787 ssh_runner.go:195] Run: crio --version
	I1002 11:54:20.429634  384787 ssh_runner.go:195] Run: crio --version
	I1002 11:54:20.476699  384787 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:54:20.478035  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetIP
	I1002 11:54:20.480720  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:20.481028  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:54:20.481054  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:54:20.481230  384787 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1002 11:54:20.485387  384787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:20.496957  384787 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:54:20.497028  384787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:20.539655  384787 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 11:54:20.539731  384787 ssh_runner.go:195] Run: which lz4
	I1002 11:54:20.543869  384787 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1002 11:54:20.548080  384787 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:54:20.548112  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1002 11:54:22.411067  384787 crio.go:444] Took 1.867223 seconds to copy over tarball
	I1002 11:54:22.411155  384787 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 11:54:20.416319  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting to get IP...
	I1002 11:54:20.417168  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.417561  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.417613  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:20.417539  385761 retry.go:31] will retry after 211.341658ms: waiting for machine to come up
	I1002 11:54:20.631097  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.631841  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.632011  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:20.631972  385761 retry.go:31] will retry after 257.651992ms: waiting for machine to come up
	I1002 11:54:20.891519  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.892077  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:20.892111  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:20.892047  385761 retry.go:31] will retry after 295.599576ms: waiting for machine to come up
	I1002 11:54:21.189739  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.190333  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.190389  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:21.190275  385761 retry.go:31] will retry after 532.182463ms: waiting for machine to come up
	I1002 11:54:21.723822  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.724414  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:21.724443  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:21.724314  385761 retry.go:31] will retry after 576.235756ms: waiting for machine to come up
	I1002 11:54:22.301975  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:22.302566  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:22.302600  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:22.302479  385761 retry.go:31] will retry after 913.441142ms: waiting for machine to come up
	I1002 11:54:23.217419  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:23.217905  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:23.217943  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:23.217839  385761 retry.go:31] will retry after 1.089960204s: waiting for machine to come up
	I1002 11:54:19.625761  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:19.857853  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:19.977490  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:20.080170  384505 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:54:20.080294  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:20.097093  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:20.611090  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:21.110857  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:21.610499  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:22.111420  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:22.138171  384505 api_server.go:72] duration metric: took 2.057999603s to wait for apiserver process to appear ...
	I1002 11:54:22.138201  384505 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:54:22.138224  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:25.604442  384787 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.193244457s)
	I1002 11:54:25.604543  384787 crio.go:451] Took 3.193443 seconds to extract the tarball
	I1002 11:54:25.604568  384787 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 11:54:25.660515  384787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:25.723308  384787 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 11:54:25.723339  384787 cache_images.go:84] Images are preloaded, skipping loading
	I1002 11:54:25.723436  384787 ssh_runner.go:195] Run: crio config
	I1002 11:54:25.781690  384787 cni.go:84] Creating CNI manager for ""
	I1002 11:54:25.781722  384787 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:25.781748  384787 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:54:25.781775  384787 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.147 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-487027 NodeName:embed-certs-487027 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:54:25.782020  384787 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-487027"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:54:25.782125  384787 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-487027 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:embed-certs-487027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:54:25.782183  384787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:54:25.791322  384787 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:54:25.791398  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:54:25.799709  384787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1002 11:54:25.818900  384787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:54:25.836913  384787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1002 11:54:25.856201  384787 ssh_runner.go:195] Run: grep 192.168.72.147	control-plane.minikube.internal$ /etc/hosts
	I1002 11:54:25.859962  384787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:25.872776  384787 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027 for IP: 192.168.72.147
	I1002 11:54:25.872818  384787 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:54:25.873061  384787 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:54:25.873125  384787 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:54:25.873225  384787 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/client.key
	I1002 11:54:25.873312  384787 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/apiserver.key.b24df18b
	I1002 11:54:25.873375  384787 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/proxy-client.key
	I1002 11:54:25.873530  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:54:25.873590  384787 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:54:25.873602  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:54:25.873633  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:54:25.873667  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:54:25.873702  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:54:25.873757  384787 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:25.874732  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:54:25.901588  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:54:25.929381  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:54:25.955358  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/embed-certs-487027/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:54:25.980414  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:54:26.008652  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:54:26.038061  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:54:26.067828  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:54:26.098717  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:54:26.131030  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:54:26.162989  384787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:54:26.189458  384787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:54:26.206791  384787 ssh_runner.go:195] Run: openssl version
	I1002 11:54:26.214436  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:54:26.226064  384787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:26.231428  384787 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:26.231504  384787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:26.238070  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:54:26.252779  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:54:26.267263  384787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:54:26.272245  384787 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:54:26.272316  384787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:54:26.278088  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:54:26.289430  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:54:26.300788  384787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:54:26.305731  384787 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:54:26.305812  384787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:54:26.311712  384787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:54:26.322855  384787 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:54:26.328688  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:54:26.336570  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:54:26.344412  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:54:26.350583  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:54:26.356815  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:54:26.364674  384787 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:54:26.372219  384787 kubeadm.go:404] StartCluster: {Name:embed-certs-487027 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-487027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.147 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:54:26.372341  384787 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:54:26.372397  384787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:26.424018  384787 cri.go:89] found id: ""
	I1002 11:54:26.424131  384787 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:54:26.435493  384787 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:54:26.435520  384787 kubeadm.go:636] restartCluster start
	I1002 11:54:26.435583  384787 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:54:26.447429  384787 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:26.448848  384787 kubeconfig.go:92] found "embed-certs-487027" server: "https://192.168.72.147:8443"
	I1002 11:54:26.452474  384787 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:54:26.462854  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:26.462924  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:26.475723  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:26.475751  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:26.475803  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:26.488962  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:26.989693  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:26.989776  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:27.002889  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:27.489487  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:27.489589  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:27.503912  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:24.308867  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:24.309362  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:24.309392  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:24.309326  385761 retry.go:31] will retry after 1.381170872s: waiting for machine to come up
	I1002 11:54:25.691931  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:25.692285  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:25.692386  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:25.692267  385761 retry.go:31] will retry after 1.748966707s: waiting for machine to come up
	I1002 11:54:27.442708  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:27.443145  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:27.443171  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:27.443107  385761 retry.go:31] will retry after 2.105420589s: waiting for machine to come up
	I1002 11:54:27.138701  384505 api_server.go:269] stopped: https://192.168.83.82:8443/healthz: Get "https://192.168.83.82:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 11:54:27.138757  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:28.249499  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:54:28.249540  384505 api_server.go:103] status: https://192.168.83.82:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:54:28.750389  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:28.756351  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1002 11:54:28.756390  384505 api_server.go:103] status: https://192.168.83.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1002 11:54:29.250308  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:29.257228  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1002 11:54:29.257264  384505 api_server.go:103] status: https://192.168.83.82:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1002 11:54:29.750123  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 11:54:29.758475  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 200:
	ok
	I1002 11:54:29.769049  384505 api_server.go:141] control plane version: v1.16.0
	I1002 11:54:29.769079  384505 api_server.go:131] duration metric: took 7.630868963s to wait for apiserver health ...
	I1002 11:54:29.769098  384505 cni.go:84] Creating CNI manager for ""
	I1002 11:54:29.769107  384505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:29.770969  384505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:54:27.989735  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:27.989861  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:28.007059  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:28.489495  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:28.489605  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:28.505845  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:28.989879  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:28.989963  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:29.004220  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:29.489847  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:29.489949  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:29.502986  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:29.989170  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:29.989264  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:30.006850  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:30.489389  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:30.489504  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:30.502094  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:30.989302  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:30.989399  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:31.005902  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:31.489967  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:31.490080  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:31.503748  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:31.989317  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:31.989405  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:32.003288  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:32.489803  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:32.489924  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:32.506744  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:29.550027  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:29.550550  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:29.550585  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:29.550488  385761 retry.go:31] will retry after 2.509962026s: waiting for machine to come up
	I1002 11:54:32.063392  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:32.063862  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:32.063887  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:32.063834  385761 retry.go:31] will retry after 2.845339865s: waiting for machine to come up
	I1002 11:54:29.772611  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:54:29.786551  384505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:54:29.807894  384505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:54:29.818837  384505 system_pods.go:59] 7 kube-system pods found
	I1002 11:54:29.818890  384505 system_pods.go:61] "coredns-5644d7b6d9-9xdpq" [2d10c772-e2f0-4bfc-9795-0721f8bab31c] Running
	I1002 11:54:29.818901  384505 system_pods.go:61] "etcd-old-k8s-version-749860" [5826895a-f14d-43ab-9f22-edad964d4a8e] Running
	I1002 11:54:29.818910  384505 system_pods.go:61] "kube-apiserver-old-k8s-version-749860" [3418ba32-aa28-4587-a231-b1f218181e71] Running
	I1002 11:54:29.818919  384505 system_pods.go:61] "kube-controller-manager-old-k8s-version-749860" [e42ff4c0-2ec4-45b9-8189-6a225c79f5c6] Running
	I1002 11:54:29.818927  384505 system_pods.go:61] "kube-proxy-gkhxb" [b3675678-e1cf-4d86-82d9-9e068bd1ba19] Running
	I1002 11:54:29.818939  384505 system_pods.go:61] "kube-scheduler-old-k8s-version-749860" [53a1c8a7-ec6d-4d47-a980-8cfab71ad467] Running
	I1002 11:54:29.818948  384505 system_pods.go:61] "storage-provisioner" [e73d6f24-1392-40ca-b37d-03c035734d1d] Running
	I1002 11:54:29.818964  384505 system_pods.go:74] duration metric: took 11.044895ms to wait for pod list to return data ...
	I1002 11:54:29.818980  384505 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:54:29.822392  384505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:54:29.822455  384505 node_conditions.go:123] node cpu capacity is 2
	I1002 11:54:29.822472  384505 node_conditions.go:105] duration metric: took 3.48317ms to run NodePressure ...
	I1002 11:54:29.822520  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:30.106960  384505 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:54:30.111692  384505 retry.go:31] will retry after 218.727225ms: kubelet not initialised
	I1002 11:54:30.336456  384505 retry.go:31] will retry after 524.868139ms: kubelet not initialised
	I1002 11:54:30.867554  384505 retry.go:31] will retry after 427.897694ms: kubelet not initialised
	I1002 11:54:31.301616  384505 retry.go:31] will retry after 722.780158ms: kubelet not initialised
	I1002 11:54:32.029512  384505 retry.go:31] will retry after 1.205429819s: kubelet not initialised
	I1002 11:54:33.253735  384505 retry.go:31] will retry after 1.476521325s: kubelet not initialised
	I1002 11:54:32.989607  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:32.989718  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:33.004745  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:33.489141  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:33.489215  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:33.506018  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:33.990120  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:33.990217  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:34.005050  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:34.489520  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:34.489608  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:34.501965  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:34.989481  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:34.989584  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:35.002635  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:35.489123  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:35.489199  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:35.502995  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:35.989474  384787 api_server.go:166] Checking apiserver status ...
	I1002 11:54:35.989565  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:36.003010  384787 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:36.463582  384787 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:54:36.463614  384787 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:54:36.463628  384787 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:54:36.463689  384787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:36.503915  384787 cri.go:89] found id: ""
	I1002 11:54:36.503982  384787 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:54:36.519603  384787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:54:36.529026  384787 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:54:36.529086  384787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:36.538424  384787 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:36.538451  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:36.670492  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:34.910513  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:34.911092  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | unable to find current IP address of domain default-k8s-diff-port-777999 in network mk-default-k8s-diff-port-777999
	I1002 11:54:34.911136  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | I1002 11:54:34.911030  385761 retry.go:31] will retry after 3.250805502s: waiting for machine to come up
	I1002 11:54:38.163585  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.164065  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Found IP for machine: 192.168.61.251
	I1002 11:54:38.164104  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has current primary IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.164124  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Reserving static IP address...
	I1002 11:54:38.164549  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-777999", mac: "52:54:00:15:a7:c9", ip: "192.168.61.251"} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.164588  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | skip adding static IP to network mk-default-k8s-diff-port-777999 - found existing host DHCP lease matching {name: "default-k8s-diff-port-777999", mac: "52:54:00:15:a7:c9", ip: "192.168.61.251"}
	I1002 11:54:38.164604  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Reserved static IP address: 192.168.61.251
	I1002 11:54:38.164623  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Waiting for SSH to be available...
	I1002 11:54:38.164639  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Getting to WaitForSSH function...
	I1002 11:54:38.166901  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.167279  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.167313  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.167579  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Using SSH client type: external
	I1002 11:54:38.167610  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa (-rw-------)
	I1002 11:54:38.167649  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:54:38.167671  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | About to run SSH command:
	I1002 11:54:38.167694  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | exit 0
	I1002 11:54:38.274617  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | SSH cmd err, output: <nil>: 
	I1002 11:54:38.275081  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetConfigRaw
	I1002 11:54:38.275836  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:38.278750  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.279150  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.279193  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.279391  384965 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/config.json ...
	I1002 11:54:38.279621  384965 machine.go:88] provisioning docker machine ...
	I1002 11:54:38.279646  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:38.279886  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetMachineName
	I1002 11:54:38.280069  384965 buildroot.go:166] provisioning hostname "default-k8s-diff-port-777999"
	I1002 11:54:38.280094  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetMachineName
	I1002 11:54:38.280253  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.282736  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.283104  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.283136  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.283230  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.283399  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.283578  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.283733  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.283892  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:38.284295  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:38.284312  384965 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-777999 && echo "default-k8s-diff-port-777999" | sudo tee /etc/hostname
	I1002 11:54:38.443082  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-777999
	
	I1002 11:54:38.443200  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.446493  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.447061  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.447106  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.447288  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.447549  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.447737  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.447899  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.448132  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:38.448554  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:38.448586  384965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-777999' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-777999/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-777999' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:54:38.594884  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:54:38.594920  384965 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:54:38.594956  384965 buildroot.go:174] setting up certificates
	I1002 11:54:38.594975  384965 provision.go:83] configureAuth start
	I1002 11:54:38.594993  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetMachineName
	I1002 11:54:38.595325  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:38.597718  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.598053  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.598088  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.598217  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.600751  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.601065  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.601099  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.601219  384965 provision.go:138] copyHostCerts
	I1002 11:54:38.601300  384965 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:54:38.601316  384965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:54:38.601393  384965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:54:38.601520  384965 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:54:38.601534  384965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:54:38.601565  384965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:54:38.601634  384965 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:54:38.601644  384965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:54:38.601670  384965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:54:38.601728  384965 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-777999 san=[192.168.61.251 192.168.61.251 localhost 127.0.0.1 minikube default-k8s-diff-port-777999]
	I1002 11:54:38.706714  384965 provision.go:172] copyRemoteCerts
	I1002 11:54:38.706783  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:54:38.706847  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.709075  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.709491  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.709547  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.709658  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.709903  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.710087  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.710216  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:38.803103  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 11:54:38.825916  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:54:38.847881  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1002 11:54:38.873772  384965 provision.go:86] duration metric: configureAuth took 278.777931ms
	I1002 11:54:38.873804  384965 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:54:38.874066  384965 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:54:38.874154  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:38.876864  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.877269  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:38.877304  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:38.877453  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:38.877666  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.877797  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:38.877936  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:38.878087  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:38.878441  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:38.878469  384965 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:54:34.736594  384505 retry.go:31] will retry after 1.866771295s: kubelet not initialised
	I1002 11:54:36.609977  384505 retry.go:31] will retry after 4.83087592s: kubelet not initialised
	I1002 11:54:39.495298  384344 start.go:369] acquired machines lock for "no-preload-304121" in 55.626389891s
	I1002 11:54:39.495355  384344 start.go:96] Skipping create...Using existing machine configuration
	I1002 11:54:39.495364  384344 fix.go:54] fixHost starting: 
	I1002 11:54:39.495800  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:54:39.495839  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:54:39.518491  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I1002 11:54:39.518893  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:54:39.519407  384344 main.go:141] libmachine: Using API Version  1
	I1002 11:54:39.519432  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:54:39.519757  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:54:39.519941  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:54:39.520099  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 11:54:39.521857  384344 fix.go:102] recreateIfNeeded on no-preload-304121: state=Stopped err=<nil>
	I1002 11:54:39.521885  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	W1002 11:54:39.522058  384344 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 11:54:39.524119  384344 out.go:177] * Restarting existing kvm2 VM for "no-preload-304121" ...
	I1002 11:54:39.215761  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:54:39.215794  384965 machine.go:91] provisioned docker machine in 936.155542ms
	I1002 11:54:39.215807  384965 start.go:300] post-start starting for "default-k8s-diff-port-777999" (driver="kvm2")
	I1002 11:54:39.215822  384965 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:54:39.215848  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.216265  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:54:39.216305  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.219032  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.219387  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.219418  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.219542  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.219748  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.219910  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.220054  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:39.317075  384965 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:54:39.321405  384965 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:54:39.321429  384965 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:54:39.321505  384965 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:54:39.321599  384965 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:54:39.321716  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:54:39.330980  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:39.357830  384965 start.go:303] post-start completed in 142.005546ms
	I1002 11:54:39.357863  384965 fix.go:56] fixHost completed within 20.350127508s
	I1002 11:54:39.357900  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.360232  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.360561  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.360598  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.360768  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.360966  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.361139  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.361264  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.361425  384965 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:39.361918  384965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I1002 11:54:39.361939  384965 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:54:39.495129  384965 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247679.435720520
	
	I1002 11:54:39.495155  384965 fix.go:206] guest clock: 1696247679.435720520
	I1002 11:54:39.495166  384965 fix.go:219] Guest: 2023-10-02 11:54:39.43572052 +0000 UTC Remote: 2023-10-02 11:54:39.357871423 +0000 UTC m=+265.343763085 (delta=77.849097ms)
	I1002 11:54:39.495194  384965 fix.go:190] guest clock delta is within tolerance: 77.849097ms
	I1002 11:54:39.495206  384965 start.go:83] releasing machines lock for "default-k8s-diff-port-777999", held for 20.487515438s
	I1002 11:54:39.495242  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.495652  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:39.498667  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.499055  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.499114  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.499370  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.499891  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.500060  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:54:39.500132  384965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:54:39.500199  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.500539  384965 ssh_runner.go:195] Run: cat /version.json
	I1002 11:54:39.500565  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:54:39.503388  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.503580  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.503885  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.503917  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.503995  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.504000  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:39.504081  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:39.504281  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:54:39.504297  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.504459  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.504459  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:54:39.504682  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:54:39.504680  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:39.504825  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:54:39.623582  384965 ssh_runner.go:195] Run: systemctl --version
	I1002 11:54:39.631181  384965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:54:39.787298  384965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:54:39.795202  384965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:54:39.795303  384965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:54:39.816471  384965 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:54:39.816495  384965 start.go:469] detecting cgroup driver to use...
	I1002 11:54:39.816567  384965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:54:39.836594  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:54:39.852798  384965 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:54:39.852911  384965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:54:39.868676  384965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:54:39.885480  384965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:54:40.003441  384965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:54:40.146812  384965 docker.go:213] disabling docker service ...
	I1002 11:54:40.146916  384965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:54:40.163451  384965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:54:40.178327  384965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:54:40.339579  384965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:54:40.463502  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:54:40.476402  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:54:40.499021  384965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:54:40.499117  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.511680  384965 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:54:40.511752  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.524364  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.536675  384965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:54:40.549326  384965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:54:40.559447  384965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:54:40.570086  384965 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:54:40.570157  384965 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:54:40.582938  384965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:54:40.594250  384965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:54:40.739528  384965 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:54:40.964248  384965 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:54:40.964336  384965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:54:40.969637  384965 start.go:537] Will wait 60s for crictl version
	I1002 11:54:40.969696  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:54:40.974270  384965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:54:41.016986  384965 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:54:41.017121  384965 ssh_runner.go:195] Run: crio --version
	I1002 11:54:41.061313  384965 ssh_runner.go:195] Run: crio --version
	I1002 11:54:41.112139  384965 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:54:39.525634  384344 main.go:141] libmachine: (no-preload-304121) Calling .Start
	I1002 11:54:39.525802  384344 main.go:141] libmachine: (no-preload-304121) Ensuring networks are active...
	I1002 11:54:39.526566  384344 main.go:141] libmachine: (no-preload-304121) Ensuring network default is active
	I1002 11:54:39.526860  384344 main.go:141] libmachine: (no-preload-304121) Ensuring network mk-no-preload-304121 is active
	I1002 11:54:39.527227  384344 main.go:141] libmachine: (no-preload-304121) Getting domain xml...
	I1002 11:54:39.527942  384344 main.go:141] libmachine: (no-preload-304121) Creating domain...
	I1002 11:54:40.973483  384344 main.go:141] libmachine: (no-preload-304121) Waiting to get IP...
	I1002 11:54:40.974731  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:40.975262  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:40.975359  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:40.975266  385933 retry.go:31] will retry after 231.149062ms: waiting for machine to come up
	I1002 11:54:41.207806  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:41.208486  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:41.208522  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:41.208461  385933 retry.go:31] will retry after 390.353931ms: waiting for machine to come up
	I1002 11:54:37.939830  384787 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.269286101s)
	I1002 11:54:37.939876  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:38.149675  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:38.246179  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:38.327794  384787 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:54:38.327884  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:38.343240  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:38.855719  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:39.355428  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:39.854862  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:40.355228  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:40.855597  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:40.891530  384787 api_server.go:72] duration metric: took 2.563733499s to wait for apiserver process to appear ...
	I1002 11:54:40.891560  384787 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:54:40.891581  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:40.892226  384787 api_server.go:269] stopped: https://192.168.72.147:8443/healthz: Get "https://192.168.72.147:8443/healthz": dial tcp 192.168.72.147:8443: connect: connection refused
	I1002 11:54:40.892274  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:40.892799  384787 api_server.go:269] stopped: https://192.168.72.147:8443/healthz: Get "https://192.168.72.147:8443/healthz": dial tcp 192.168.72.147:8443: connect: connection refused
	I1002 11:54:41.393747  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:41.113638  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetIP
	I1002 11:54:41.116930  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:41.117360  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:54:41.117396  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:54:41.117684  384965 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1002 11:54:41.122622  384965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:41.138418  384965 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:54:41.138496  384965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:41.189380  384965 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 11:54:41.189465  384965 ssh_runner.go:195] Run: which lz4
	I1002 11:54:41.194945  384965 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1002 11:54:41.200215  384965 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 11:54:41.200254  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1002 11:54:43.164279  384965 crio.go:444] Took 1.969380 seconds to copy over tarball
	I1002 11:54:43.164370  384965 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 11:54:41.447247  384505 retry.go:31] will retry after 8.441231321s: kubelet not initialised
	I1002 11:54:41.600866  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:41.601691  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:41.601729  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:41.601345  385933 retry.go:31] will retry after 381.859851ms: waiting for machine to come up
	I1002 11:54:41.985107  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:41.986545  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:41.986572  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:41.986434  385933 retry.go:31] will retry after 606.51751ms: waiting for machine to come up
	I1002 11:54:42.594443  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:42.595004  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:42.595031  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:42.594935  385933 retry.go:31] will retry after 474.689172ms: waiting for machine to come up
	I1002 11:54:43.071618  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:43.072140  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:43.072196  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:43.072085  385933 retry.go:31] will retry after 931.163736ms: waiting for machine to come up
	I1002 11:54:44.005228  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:44.005899  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:44.005927  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:44.005852  385933 retry.go:31] will retry after 1.133426769s: waiting for machine to come up
	I1002 11:54:45.141320  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:45.142068  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:45.142099  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:45.141965  385933 retry.go:31] will retry after 1.458717431s: waiting for machine to come up
	I1002 11:54:45.416658  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:54:45.416697  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:54:45.416713  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:45.489874  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:54:45.489918  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:54:45.893115  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:45.901437  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:54:45.901477  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:54:46.393114  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:46.399302  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:54:46.399337  384787 api_server.go:103] status: https://192.168.72.147:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:54:46.892875  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:54:46.898524  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 200:
	ok
	I1002 11:54:46.908311  384787 api_server.go:141] control plane version: v1.28.2
	I1002 11:54:46.908342  384787 api_server.go:131] duration metric: took 6.016772427s to wait for apiserver health ...
	I1002 11:54:46.908354  384787 cni.go:84] Creating CNI manager for ""
	I1002 11:54:46.908364  384787 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:47.225292  384787 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:54:47.481617  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:54:47.499011  384787 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:54:47.535238  384787 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:54:46.620757  384965 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.456345361s)
	I1002 11:54:46.620801  384965 crio.go:451] Took 3.456492 seconds to extract the tarball
	I1002 11:54:46.620814  384965 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 11:54:46.677550  384965 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:54:46.810235  384965 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 11:54:46.810265  384965 cache_images.go:84] Images are preloaded, skipping loading
	I1002 11:54:46.810334  384965 ssh_runner.go:195] Run: crio config
	I1002 11:54:46.875355  384965 cni.go:84] Creating CNI manager for ""
	I1002 11:54:46.875378  384965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:54:46.875397  384965 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:54:46.875417  384965 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.251 APIServerPort:8444 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-777999 NodeName:default-k8s-diff-port-777999 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:54:46.875588  384965 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.251
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-777999"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:54:46.875674  384965 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-777999 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1002 11:54:46.875737  384965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:54:46.886943  384965 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:54:46.887034  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:54:46.898434  384965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1002 11:54:46.917830  384965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:54:46.936297  384965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1002 11:54:46.954413  384965 ssh_runner.go:195] Run: grep 192.168.61.251	control-plane.minikube.internal$ /etc/hosts
	I1002 11:54:46.958832  384965 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.251	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:54:46.970802  384965 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999 for IP: 192.168.61.251
	I1002 11:54:46.970845  384965 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:54:46.971031  384965 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:54:46.971093  384965 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:54:46.971194  384965 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/client.key
	I1002 11:54:46.971286  384965 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/apiserver.key.04d51ca9
	I1002 11:54:46.971341  384965 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/proxy-client.key
	I1002 11:54:46.971469  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:54:46.971507  384965 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:54:46.971524  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:54:46.971572  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:54:46.971614  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:54:46.971652  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:54:46.971713  384965 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:54:46.972319  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:54:46.998880  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:54:47.024639  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:54:47.048695  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/default-k8s-diff-port-777999/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 11:54:47.076815  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:54:47.102469  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:54:47.128913  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:54:47.155863  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:54:47.185058  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:54:47.212289  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:54:47.236848  384965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:54:47.261485  384965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:54:47.278535  384965 ssh_runner.go:195] Run: openssl version
	I1002 11:54:47.284888  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:54:47.296352  384965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:54:47.301262  384965 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:54:47.301331  384965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:54:47.307136  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:54:47.317650  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:54:47.328371  384965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:47.333341  384965 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:47.333421  384965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:54:47.339268  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:54:47.349646  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:54:47.360575  384965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:54:47.367279  384965 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:54:47.367346  384965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:54:47.374693  384965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:54:47.386302  384965 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:54:47.391448  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:54:47.397407  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:54:47.403122  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:54:47.408810  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:54:47.414684  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:54:47.420606  384965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:54:47.426568  384965 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-777999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-777999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:54:47.426702  384965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:54:47.426747  384965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:47.467190  384965 cri.go:89] found id: ""
	I1002 11:54:47.467275  384965 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:54:47.478921  384965 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:54:47.478944  384965 kubeadm.go:636] restartCluster start
	I1002 11:54:47.479016  384965 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:54:47.492971  384965 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:47.494091  384965 kubeconfig.go:92] found "default-k8s-diff-port-777999" server: "https://192.168.61.251:8444"
	I1002 11:54:47.498738  384965 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:54:47.510376  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:47.510454  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:47.523397  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:47.523417  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:47.523459  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:47.536893  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:48.037653  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:48.037746  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:48.055280  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:48.537887  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:48.537979  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:48.555759  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:49.037998  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:49.038108  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:46.602496  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:46.654672  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:46.654707  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:46.602962  385933 retry.go:31] will retry after 1.25268648s: waiting for machine to come up
	I1002 11:54:47.857506  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:47.858115  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:47.858149  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:47.858061  385933 retry.go:31] will retry after 2.104571101s: waiting for machine to come up
	I1002 11:54:49.964533  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:49.964997  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:49.965031  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:49.964942  385933 retry.go:31] will retry after 2.047553587s: waiting for machine to come up
	I1002 11:54:47.766443  384787 system_pods.go:59] 8 kube-system pods found
	I1002 11:54:47.766485  384787 system_pods.go:61] "coredns-5dd5756b68-6glsj" [ad7c852a-cdac-4ada-99da-4115b447f00c] Running
	I1002 11:54:47.766498  384787 system_pods.go:61] "etcd-embed-certs-487027" [78f5c4ed-7baf-4339-811f-c25e934de0c4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:54:47.766516  384787 system_pods.go:61] "kube-apiserver-embed-certs-487027" [275bb65c-b955-43d9-839b-6439e8c19662] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:54:47.766524  384787 system_pods.go:61] "kube-controller-manager-embed-certs-487027" [d798407e-abe2-4b70-952e-1274fff006bc] Running
	I1002 11:54:47.766532  384787 system_pods.go:61] "kube-proxy-wjjtv" [54e35e5e-7045-497f-8fef-322fe0e43afd] Running
	I1002 11:54:47.766543  384787 system_pods.go:61] "kube-scheduler-embed-certs-487027" [62c61cf2-f18e-47a9-9729-20e87fe02c89] Running
	I1002 11:54:47.766556  384787 system_pods.go:61] "metrics-server-57f55c9bc5-d8c7b" [71c33b74-c942-403a-a1d4-2b852f0070a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:54:47.766568  384787 system_pods.go:61] "storage-provisioner" [0a8120e1-c879-4726-abab-f95a4a3c8721] Running
	I1002 11:54:47.766581  384787 system_pods.go:74] duration metric: took 231.314062ms to wait for pod list to return data ...
	I1002 11:54:47.766593  384787 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:54:48.206673  384787 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:54:48.206710  384787 node_conditions.go:123] node cpu capacity is 2
	I1002 11:54:48.206722  384787 node_conditions.go:105] duration metric: took 440.12142ms to run NodePressure ...
	I1002 11:54:48.206743  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:48.736269  384787 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:54:48.754061  384787 kubeadm.go:787] kubelet initialised
	I1002 11:54:48.754094  384787 kubeadm.go:788] duration metric: took 17.795803ms waiting for restarted kubelet to initialise ...
	I1002 11:54:48.754106  384787 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:54:48.763480  384787 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:50.815900  384787 pod_ready.go:102] pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace has status "Ready":"False"
	I1002 11:54:51.815729  384787 pod_ready.go:92] pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:51.815752  384787 pod_ready.go:81] duration metric: took 3.052241738s waiting for pod "coredns-5dd5756b68-6glsj" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:51.815761  384787 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	W1002 11:54:49.055614  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:49.537412  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:49.537517  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:49.554838  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:50.037334  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:50.037460  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:50.050213  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:50.537454  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:50.537586  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:50.551733  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:51.037281  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:51.037394  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:51.055077  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:51.537591  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:51.537672  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:51.555315  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:52.037929  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:52.038038  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:52.052852  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:52.537358  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:52.537435  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:52.553169  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:53.037814  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:53.037913  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:53.055176  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:53.537764  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:53.537869  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:53.554864  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:54.037941  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:54.038052  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:49.895219  384505 retry.go:31] will retry after 9.020637322s: kubelet not initialised
	I1002 11:54:52.015240  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:52.015623  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:52.015646  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:52.015594  385933 retry.go:31] will retry after 3.361214112s: waiting for machine to come up
	I1002 11:54:55.378293  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:55.378805  384344 main.go:141] libmachine: (no-preload-304121) DBG | unable to find current IP address of domain no-preload-304121 in network mk-no-preload-304121
	I1002 11:54:55.378853  384344 main.go:141] libmachine: (no-preload-304121) DBG | I1002 11:54:55.378772  385933 retry.go:31] will retry after 3.33521217s: waiting for machine to come up
	I1002 11:54:53.337930  384787 pod_ready.go:92] pod "etcd-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:53.337967  384787 pod_ready.go:81] duration metric: took 1.522199476s waiting for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:53.337979  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:53.344756  384787 pod_ready.go:92] pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:53.344782  384787 pod_ready.go:81] duration metric: took 6.79552ms waiting for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:53.344791  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:55.549698  384787 pod_ready.go:102] pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace has status "Ready":"False"
	I1002 11:54:57.049146  384787 pod_ready.go:92] pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:57.049177  384787 pod_ready.go:81] duration metric: took 3.704379238s waiting for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:57.049192  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wjjtv" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:57.055125  384787 pod_ready.go:92] pod "kube-proxy-wjjtv" in "kube-system" namespace has status "Ready":"True"
	I1002 11:54:57.055144  384787 pod_ready.go:81] duration metric: took 5.945156ms waiting for pod "kube-proxy-wjjtv" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:57.055152  384787 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	W1002 11:54:54.056234  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:54.537821  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:54.537918  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:54.552634  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:55.037141  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:55.037220  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:55.052963  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:55.537432  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:55.537531  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:55.552525  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:56.036986  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:56.037074  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:56.049750  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:56.537060  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:56.537144  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:56.548686  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:57.037931  384965 api_server.go:166] Checking apiserver status ...
	I1002 11:54:57.038029  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:54:57.049828  384965 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:54:57.511461  384965 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:54:57.511495  384965 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:54:57.511510  384965 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:54:57.511571  384965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:54:57.552784  384965 cri.go:89] found id: ""
	I1002 11:54:57.552866  384965 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:54:57.567867  384965 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:54:57.578391  384965 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:54:57.578474  384965 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:57.587065  384965 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:54:57.587086  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:57.717787  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.423038  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.607300  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.687023  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:54:58.778674  384965 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:54:58.778770  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:58.794920  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:58.923574  384505 retry.go:31] will retry after 19.662203801s: kubelet not initialised
	I1002 11:54:58.715622  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.716211  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has current primary IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.716229  384344 main.go:141] libmachine: (no-preload-304121) Found IP for machine: 192.168.39.143
	I1002 11:54:58.716248  384344 main.go:141] libmachine: (no-preload-304121) Reserving static IP address...
	I1002 11:54:58.716781  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "no-preload-304121", mac: "52:54:00:11:b9:ea", ip: "192.168.39.143"} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.716823  384344 main.go:141] libmachine: (no-preload-304121) Reserved static IP address: 192.168.39.143
	I1002 11:54:58.716845  384344 main.go:141] libmachine: (no-preload-304121) DBG | skip adding static IP to network mk-no-preload-304121 - found existing host DHCP lease matching {name: "no-preload-304121", mac: "52:54:00:11:b9:ea", ip: "192.168.39.143"}
	I1002 11:54:58.716864  384344 main.go:141] libmachine: (no-preload-304121) DBG | Getting to WaitForSSH function...
	I1002 11:54:58.716875  384344 main.go:141] libmachine: (no-preload-304121) Waiting for SSH to be available...
	I1002 11:54:58.719551  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.719991  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.720031  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.720236  384344 main.go:141] libmachine: (no-preload-304121) DBG | Using SSH client type: external
	I1002 11:54:58.720273  384344 main.go:141] libmachine: (no-preload-304121) DBG | Using SSH private key: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa (-rw-------)
	I1002 11:54:58.720309  384344 main.go:141] libmachine: (no-preload-304121) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 11:54:58.720329  384344 main.go:141] libmachine: (no-preload-304121) DBG | About to run SSH command:
	I1002 11:54:58.720355  384344 main.go:141] libmachine: (no-preload-304121) DBG | exit 0
	I1002 11:54:58.866583  384344 main.go:141] libmachine: (no-preload-304121) DBG | SSH cmd err, output: <nil>: 
	I1002 11:54:58.866916  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetConfigRaw
	I1002 11:54:58.867637  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:54:58.870844  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.871270  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.871305  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.871677  384344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/config.json ...
	I1002 11:54:58.871886  384344 machine.go:88] provisioning docker machine ...
	I1002 11:54:58.871906  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:54:58.872159  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetMachineName
	I1002 11:54:58.872343  384344 buildroot.go:166] provisioning hostname "no-preload-304121"
	I1002 11:54:58.872370  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetMachineName
	I1002 11:54:58.872566  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:58.875795  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.876215  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:58.876252  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:58.876420  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:58.876592  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:58.876766  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:58.876935  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:58.877113  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:58.877512  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:54:58.877528  384344 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-304121 && echo "no-preload-304121" | sudo tee /etc/hostname
	I1002 11:54:59.032306  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-304121
	
	I1002 11:54:59.032336  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.035842  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.036373  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.036412  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.036749  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:59.036953  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.037145  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.037313  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:59.037564  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:59.038035  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:54:59.038064  384344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-304121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-304121/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-304121' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 11:54:59.175880  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 11:54:59.175910  384344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17340-332611/.minikube CaCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17340-332611/.minikube}
	I1002 11:54:59.175933  384344 buildroot.go:174] setting up certificates
	I1002 11:54:59.175945  384344 provision.go:83] configureAuth start
	I1002 11:54:59.175957  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetMachineName
	I1002 11:54:59.176253  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:54:59.179169  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.179541  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.179577  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.179797  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.182011  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.182418  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.182451  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.182653  384344 provision.go:138] copyHostCerts
	I1002 11:54:59.182718  384344 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem, removing ...
	I1002 11:54:59.182732  384344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem
	I1002 11:54:59.182807  384344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/ca.pem (1082 bytes)
	I1002 11:54:59.182919  384344 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem, removing ...
	I1002 11:54:59.182931  384344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem
	I1002 11:54:59.182963  384344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/cert.pem (1123 bytes)
	I1002 11:54:59.183050  384344 exec_runner.go:144] found /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem, removing ...
	I1002 11:54:59.183060  384344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem
	I1002 11:54:59.183088  384344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17340-332611/.minikube/key.pem (1675 bytes)
	I1002 11:54:59.183174  384344 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem org=jenkins.no-preload-304121 san=[192.168.39.143 192.168.39.143 localhost 127.0.0.1 minikube no-preload-304121]
	I1002 11:54:59.492171  384344 provision.go:172] copyRemoteCerts
	I1002 11:54:59.492239  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 11:54:59.492266  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.495249  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.495698  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.495746  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.495900  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:59.496143  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.496299  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:59.496460  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:54:59.594538  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1002 11:54:59.625319  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 11:54:59.652745  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 11:54:59.676895  384344 provision.go:86] duration metric: configureAuth took 500.931279ms
	I1002 11:54:59.676930  384344 buildroot.go:189] setting minikube options for container-runtime
	I1002 11:54:59.677160  384344 config.go:182] Loaded profile config "no-preload-304121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:54:59.677259  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:54:59.680393  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.680730  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:54:59.680764  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:54:59.681190  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:54:59.681491  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.681698  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:54:59.681875  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:54:59.682112  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:54:59.682651  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:54:59.682684  384344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 11:55:00.029184  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 11:55:00.029213  384344 machine.go:91] provisioned docker machine in 1.157312136s
	I1002 11:55:00.029226  384344 start.go:300] post-start starting for "no-preload-304121" (driver="kvm2")
	I1002 11:55:00.029240  384344 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 11:55:00.029296  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.029683  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 11:55:00.029722  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.032977  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.033456  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.033488  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.033677  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.033919  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.034136  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.034351  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:55:00.137946  384344 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 11:55:00.144169  384344 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 11:55:00.144209  384344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/addons for local assets ...
	I1002 11:55:00.144291  384344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17340-332611/.minikube/files for local assets ...
	I1002 11:55:00.144405  384344 filesync.go:149] local asset: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem -> 3398652.pem in /etc/ssl/certs
	I1002 11:55:00.144609  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 11:55:00.157898  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:55:00.186547  384344 start.go:303] post-start completed in 157.300734ms
	I1002 11:55:00.186580  384344 fix.go:56] fixHost completed within 20.691216247s
	I1002 11:55:00.186609  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.189905  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.190374  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.190411  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.190718  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.190940  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.191159  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.191335  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.191494  384344 main.go:141] libmachine: Using SSH client type: native
	I1002 11:55:00.191981  384344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1002 11:55:00.191996  384344 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 11:55:00.328123  384344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696247700.270150690
	
	I1002 11:55:00.328155  384344 fix.go:206] guest clock: 1696247700.270150690
	I1002 11:55:00.328166  384344 fix.go:219] Guest: 2023-10-02 11:55:00.27015069 +0000 UTC Remote: 2023-10-02 11:55:00.186584697 +0000 UTC m=+358.877281851 (delta=83.565993ms)
	I1002 11:55:00.328193  384344 fix.go:190] guest clock delta is within tolerance: 83.565993ms
	I1002 11:55:00.328207  384344 start.go:83] releasing machines lock for "no-preload-304121", held for 20.832874678s
	I1002 11:55:00.328234  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.328584  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:55:00.331898  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.332432  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.332468  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.332651  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.333263  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.333480  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 11:55:00.333586  384344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 11:55:00.333647  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.333895  384344 ssh_runner.go:195] Run: cat /version.json
	I1002 11:55:00.333943  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 11:55:00.336673  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.336920  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.337021  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.337083  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.337207  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.337399  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.337487  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:00.337518  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:00.337566  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.337642  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 11:55:00.337734  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:55:00.337835  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 11:55:00.338131  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 11:55:00.338307  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 11:55:00.427708  384344 ssh_runner.go:195] Run: systemctl --version
	I1002 11:55:00.456367  384344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 11:55:00.604389  384344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 11:55:00.612859  384344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 11:55:00.612968  384344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 11:55:00.627986  384344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 11:55:00.628056  384344 start.go:469] detecting cgroup driver to use...
	I1002 11:55:00.628128  384344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 11:55:00.643670  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 11:55:00.656987  384344 docker.go:197] disabling cri-docker service (if available) ...
	I1002 11:55:00.657058  384344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 11:55:00.669708  384344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 11:55:00.682586  384344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 11:55:00.790044  384344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 11:55:00.913634  384344 docker.go:213] disabling docker service ...
	I1002 11:55:00.913717  384344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 11:55:00.926496  384344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 11:55:00.938769  384344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 11:55:01.045413  384344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 11:55:01.169133  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 11:55:01.182168  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 11:55:01.201850  384344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 11:55:01.201926  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.214874  384344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 11:55:01.214972  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.225123  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.237560  384344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 11:55:01.247898  384344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 11:55:01.260797  384344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 11:55:01.271528  384344 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 11:55:01.271602  384344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 11:55:01.285906  384344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 11:55:01.297623  384344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 11:55:01.429828  384344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 11:55:01.617340  384344 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 11:55:01.617486  384344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 11:55:01.622871  384344 start.go:537] Will wait 60s for crictl version
	I1002 11:55:01.622942  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:01.627257  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 11:55:01.674032  384344 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1002 11:55:01.674130  384344 ssh_runner.go:195] Run: crio --version
	I1002 11:55:01.726822  384344 ssh_runner.go:195] Run: crio --version
	I1002 11:55:01.777433  384344 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1002 11:54:59.549254  384787 pod_ready.go:102] pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:01.550493  384787 pod_ready.go:92] pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:01.550524  384787 pod_ready.go:81] duration metric: took 4.495364436s waiting for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:01.550537  384787 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace to be "Ready" ...
	I1002 11:54:59.310529  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:54:59.811582  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:00.310859  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:00.810518  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:01.311217  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:01.336761  384965 api_server.go:72] duration metric: took 2.55808678s to wait for apiserver process to appear ...
	I1002 11:55:01.336793  384965 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:55:01.336814  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:01.778891  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetIP
	I1002 11:55:01.781741  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:01.782048  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 11:55:01.782088  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 11:55:01.782334  384344 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 11:55:01.787047  384344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:55:01.803390  384344 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 11:55:01.803482  384344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 11:55:01.853839  384344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 11:55:01.853868  384344 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.2 registry.k8s.io/kube-controller-manager:v1.28.2 registry.k8s.io/kube-scheduler:v1.28.2 registry.k8s.io/kube-proxy:v1.28.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 11:55:01.853954  384344 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:01.853966  384344 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:01.854164  384344 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:01.854189  384344 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:01.854254  384344 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:01.854169  384344 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:01.854325  384344 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1002 11:55:01.854171  384344 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:01.855315  384344 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:01.855339  384344 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:01.855355  384344 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:01.855809  384344 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:01.855841  384344 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:01.855856  384344 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1002 11:55:01.855809  384344 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:01.855815  384344 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.001275  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.001299  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:02.001275  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:02.002150  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1002 11:55:02.004275  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:02.007591  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:02.028882  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:02.199630  384344 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1002 11:55:02.199751  384344 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.199678  384344 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1002 11:55:02.199838  384344 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:02.199866  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.199890  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.199707  384344 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.2" does not exist at hash "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57" in container runtime
	I1002 11:55:02.199951  384344 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:02.199981  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305560  384344 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.2" does not exist at hash "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce" in container runtime
	I1002 11:55:02.305618  384344 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:02.305670  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305721  384344 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.2" does not exist at hash "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8" in container runtime
	I1002 11:55:02.305784  384344 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:02.305826  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305853  384344 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.2" needs transfer: "registry.k8s.io/kube-proxy:v1.28.2" does not exist at hash "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0" in container runtime
	I1002 11:55:02.305893  384344 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:02.305934  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:02.305943  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1002 11:55:02.305999  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 11:55:02.306035  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1002 11:55:02.403560  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.2
	I1002 11:55:02.403701  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1002 11:55:02.403791  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I1002 11:55:02.403861  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.2
	I1002 11:55:02.403983  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1002 11:55:02.404056  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I1002 11:55:02.404148  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.2
	I1002 11:55:02.404200  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.2
	I1002 11:55:02.404274  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.2
	I1002 11:55:02.512787  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.2
	I1002 11:55:02.512909  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1002 11:55:02.513038  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1002 11:55:02.513062  384344 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1002 11:55:02.513091  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1002 11:55:02.513169  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.2 (exists)
	I1002 11:55:02.513217  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.2
	I1002 11:55:02.513258  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.2
	I1002 11:55:02.513292  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1002 11:55:02.513343  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.2
	I1002 11:55:02.513399  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.2
	I1002 11:55:02.519549  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.2 (exists)
	I1002 11:55:02.529685  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.2 (exists)
	I1002 11:55:02.739233  384344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:03.573767  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:05.577137  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:07.577690  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:06.191660  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:06.191697  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:06.191711  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:06.268234  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:06.268270  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:06.769081  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:06.775235  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:06.775267  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:07.268848  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:07.289255  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:07.289294  384965 api_server.go:103] status: https://192.168.61.251:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:07.769010  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:55:07.776315  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 200:
	ok
	I1002 11:55:07.785543  384965 api_server.go:141] control plane version: v1.28.2
	I1002 11:55:07.785578  384965 api_server.go:131] duration metric: took 6.448776132s to wait for apiserver health ...
	I1002 11:55:07.785620  384965 cni.go:84] Creating CNI manager for ""
	I1002 11:55:07.785630  384965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:55:07.963339  384965 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:55:07.965036  384965 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:55:08.003261  384965 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:55:08.072023  384965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:55:08.084616  384965 system_pods.go:59] 8 kube-system pods found
	I1002 11:55:08.084657  384965 system_pods.go:61] "coredns-5dd5756b68-9wv56" [f04d6125-ea28-41cc-9251-7ccee27162bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:55:08.084670  384965 system_pods.go:61] "etcd-default-k8s-diff-port-777999" [5bc34f24-2922-4ce8-b11d-935f9b3c8b4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:55:08.084680  384965 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-777999" [b44f8ca6-f43e-4c99-af8d-23255f94257c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:55:08.084693  384965 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-777999" [5e20830f-d51b-4ca7-a7b6-e0f24f4a50e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 11:55:08.084709  384965 system_pods.go:61] "kube-proxy-gchnc" [061811c7-2ac8-448a-b441-838f9aaf9145] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 11:55:08.084723  384965 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-777999" [9d874541-7d2e-4c9b-8935-2bf7386e6a07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 11:55:08.084737  384965 system_pods.go:61] "metrics-server-57f55c9bc5-wk2c7" [f28e9db7-2182-40d8-85a7-fa40c2ff8077] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:55:08.084752  384965 system_pods.go:61] "storage-provisioner" [aff1275b-909d-4c70-9fb5-cb36170c591e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:55:08.084767  384965 system_pods.go:74] duration metric: took 12.715919ms to wait for pod list to return data ...
	I1002 11:55:08.084783  384965 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:55:08.089289  384965 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:55:08.089323  384965 node_conditions.go:123] node cpu capacity is 2
	I1002 11:55:08.089337  384965 node_conditions.go:105] duration metric: took 4.548285ms to run NodePressure ...
	I1002 11:55:08.089359  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:08.496528  384965 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:55:08.509299  384965 kubeadm.go:787] kubelet initialised
	I1002 11:55:08.509331  384965 kubeadm.go:788] duration metric: took 12.771905ms waiting for restarted kubelet to initialise ...
	I1002 11:55:08.509343  384965 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:08.516124  384965 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.528838  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.528938  384965 pod_ready.go:81] duration metric: took 12.780895ms waiting for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.528967  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.529001  384965 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.534830  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.534867  384965 pod_ready.go:81] duration metric: took 5.838075ms waiting for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.534882  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.534892  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.549854  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.549885  384965 pod_ready.go:81] duration metric: took 14.983531ms waiting for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.549900  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.549913  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.559230  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.559313  384965 pod_ready.go:81] duration metric: took 9.38728ms waiting for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.559335  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.559347  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:08.900163  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-proxy-gchnc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.900190  384965 pod_ready.go:81] duration metric: took 340.83496ms waiting for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:08.900199  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-proxy-gchnc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:08.900208  384965 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:09.516054  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.516096  384965 pod_ready.go:81] duration metric: took 615.877294ms waiting for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:09.516112  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.516121  384965 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:09.701735  384965 pod_ready.go:97] node "default-k8s-diff-port-777999" hosting pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.701764  384965 pod_ready.go:81] duration metric: took 185.632721ms waiting for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	E1002 11:55:09.701775  384965 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-777999" hosting pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:09.701782  384965 pod_ready.go:38] duration metric: took 1.192428133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:09.701800  384965 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:55:09.715441  384965 ops.go:34] apiserver oom_adj: -16
	I1002 11:55:09.715471  384965 kubeadm.go:640] restartCluster took 22.236518554s
	I1002 11:55:09.715483  384965 kubeadm.go:406] StartCluster complete in 22.288924118s
	I1002 11:55:09.715506  384965 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:55:09.715603  384965 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:55:09.717604  384965 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:55:09.832925  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:55:09.832958  384965 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:55:09.833045  384965 config.go:182] Loaded profile config "default-k8s-diff-port-777999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:55:09.833070  384965 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-777999"
	I1002 11:55:09.833078  384965 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-777999"
	I1002 11:55:09.833081  384965 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-777999"
	I1002 11:55:09.833097  384965 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-777999"
	W1002 11:55:09.833106  384965 addons.go:240] addon storage-provisioner should already be in state true
	I1002 11:55:09.833106  384965 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-777999"
	I1002 11:55:09.833108  384965 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-777999"
	W1002 11:55:09.833125  384965 addons.go:240] addon metrics-server should already be in state true
	I1002 11:55:09.833170  384965 host.go:66] Checking if "default-k8s-diff-port-777999" exists ...
	I1002 11:55:09.833170  384965 host.go:66] Checking if "default-k8s-diff-port-777999" exists ...
	I1002 11:55:09.833570  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.833592  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.833615  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.833624  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.833634  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.833646  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.839134  384965 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-777999" context rescaled to 1 replicas
	I1002 11:55:09.839204  384965 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.251 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:55:09.882782  384965 out.go:177] * Verifying Kubernetes components...
	I1002 11:55:09.852478  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I1002 11:55:09.853164  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46377
	I1002 11:55:09.853212  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33563
	I1002 11:55:09.884413  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:55:09.884847  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.884862  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.884978  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.885450  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.885473  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.885590  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.885616  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.885875  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.885905  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.885931  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.885991  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.886291  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.886499  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.886608  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.886609  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.886643  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.886650  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.890816  384965 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-777999"
	W1002 11:55:09.890840  384965 addons.go:240] addon default-storageclass should already be in state true
	I1002 11:55:09.890874  384965 host.go:66] Checking if "default-k8s-diff-port-777999" exists ...
	I1002 11:55:09.891346  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.891381  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.905399  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I1002 11:55:09.905472  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1002 11:55:09.905949  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.906013  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.906516  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.906548  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.906616  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.906638  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.907044  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.907050  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.907204  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.907296  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.907802  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I1002 11:55:09.908797  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.909184  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:55:09.911200  384965 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 11:55:09.909554  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.909557  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:55:09.913028  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.913040  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 11:55:09.913097  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 11:55:09.913128  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:55:09.914961  384965 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:10.102329  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.589219551s)
	I1002 11:55:10.102369  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1002 11:55:10.102405  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.2
	I1002 11:55:10.102437  384344 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.2: (7.58915139s)
	I1002 11:55:10.102467  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.2 (exists)
	I1002 11:55:10.102468  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.2
	I1002 11:55:10.102517  384344 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (7.363200276s)
	I1002 11:55:10.102554  384344 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1002 11:55:10.102587  384344 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:10.102639  384344 ssh_runner.go:195] Run: which crictl
	I1002 11:55:10.107376  384344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:55:09.913417  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.916644  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.916734  384965 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:55:09.916751  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:55:09.916773  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:55:09.917177  384965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:55:09.917217  384965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:55:09.917938  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:55:09.917968  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.918238  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:55:09.918494  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:55:09.918725  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:55:09.919087  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:55:09.920001  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.920470  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:55:09.920499  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.920702  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:55:09.920898  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:55:09.921037  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:55:09.921164  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:55:09.936676  384965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41353
	I1002 11:55:09.937243  384965 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:55:09.937814  384965 main.go:141] libmachine: Using API Version  1
	I1002 11:55:09.937838  384965 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:55:09.938269  384965 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:55:09.938503  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetState
	I1002 11:55:09.940662  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .DriverName
	I1002 11:55:09.940930  384965 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:55:09.940952  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:55:09.940975  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHHostname
	I1002 11:55:09.944168  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.944929  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a7:c9", ip: ""} in network mk-default-k8s-diff-port-777999: {Iface:virbr3 ExpiryTime:2023-10-02 12:54:32 +0000 UTC Type:0 Mac:52:54:00:15:a7:c9 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:default-k8s-diff-port-777999 Clientid:01:52:54:00:15:a7:c9}
	I1002 11:55:09.944938  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHPort
	I1002 11:55:09.944972  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | domain default-k8s-diff-port-777999 has defined IP address 192.168.61.251 and MAC address 52:54:00:15:a7:c9 in network mk-default-k8s-diff-port-777999
	I1002 11:55:09.945129  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHKeyPath
	I1002 11:55:09.945323  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .GetSSHUsername
	I1002 11:55:09.945464  384965 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/default-k8s-diff-port-777999/id_rsa Username:docker}
	I1002 11:55:10.048027  384965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:55:10.064428  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 11:55:10.064457  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 11:55:10.113892  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 11:55:10.113922  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 11:55:10.162803  384965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:55:10.203352  384965 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-777999" to be "Ready" ...
	I1002 11:55:10.203377  384965 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1002 11:55:10.209916  384965 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:55:10.209945  384965 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 11:55:10.283168  384965 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:55:11.838556  384965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.790470973s)
	I1002 11:55:11.838584  384965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.675739061s)
	I1002 11:55:11.838618  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.838620  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.838659  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.838635  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.838886  384965 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.555664753s)
	I1002 11:55:11.838941  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.838954  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.838980  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.838992  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.839001  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.838961  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.839104  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.839139  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.839157  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.839170  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.839303  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.839369  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.839409  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.839421  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.839431  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.839688  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.839700  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.839710  384965 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-777999"
	I1002 11:55:11.841889  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.841915  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.842201  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.842253  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.842259  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.842269  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.849511  384965 main.go:141] libmachine: Making call to close driver server
	I1002 11:55:11.849529  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) Calling .Close
	I1002 11:55:11.849874  384965 main.go:141] libmachine: (default-k8s-diff-port-777999) DBG | Closing plugin on server side
	I1002 11:55:11.849878  384965 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:55:11.849901  384965 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:55:11.853656  384965 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1002 11:55:10.075236  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:12.576161  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:11.855303  384965 addons.go:502] enable addons completed in 2.022363817s: enabled=[metrics-server storage-provisioner default-storageclass]
	I1002 11:55:12.217572  384965 node_ready.go:58] node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:12.931492  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.2: (2.828987001s)
	I1002 11:55:12.931534  384344 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.824127868s)
	I1002 11:55:12.931594  384344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1002 11:55:12.931539  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.2 from cache
	I1002 11:55:12.931660  384344 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1002 11:55:12.931718  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1002 11:55:12.931728  384344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1002 11:55:12.939018  384344 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1002 11:55:14.293770  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.362024408s)
	I1002 11:55:14.293812  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1002 11:55:14.293844  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1002 11:55:14.293919  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.2
	I1002 11:55:15.843943  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.2: (1.549996136s)
	I1002 11:55:15.843970  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.2 from cache
	I1002 11:55:15.843995  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.2
	I1002 11:55:15.844044  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.2
	I1002 11:55:15.077109  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:17.575669  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:14.219000  384965 node_ready.go:58] node "default-k8s-diff-port-777999" has status "Ready":"False"
	I1002 11:55:16.717611  384965 node_ready.go:49] node "default-k8s-diff-port-777999" has status "Ready":"True"
	I1002 11:55:16.717639  384965 node_ready.go:38] duration metric: took 6.514250616s waiting for node "default-k8s-diff-port-777999" to be "Ready" ...
	I1002 11:55:16.717652  384965 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:16.724331  384965 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.242058  384965 pod_ready.go:92] pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:17.242084  384965 pod_ready.go:81] duration metric: took 517.728305ms waiting for pod "coredns-5dd5756b68-9wv56" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.242093  384965 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.247916  384965 pod_ready.go:92] pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:17.247946  384965 pod_ready.go:81] duration metric: took 5.844733ms waiting for pod "etcd-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.247960  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.596133  384505 kubeadm.go:787] kubelet initialised
	I1002 11:55:18.596163  384505 kubeadm.go:788] duration metric: took 48.489169583s waiting for restarted kubelet to initialise ...
	I1002 11:55:18.596173  384505 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:18.603606  384505 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9xdpq" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.612080  384505 pod_ready.go:92] pod "coredns-5644d7b6d9-9xdpq" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.612112  384505 pod_ready.go:81] duration metric: took 8.472159ms waiting for pod "coredns-5644d7b6d9-9xdpq" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.612124  384505 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-xrfq8" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.618116  384505 pod_ready.go:92] pod "coredns-5644d7b6d9-xrfq8" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.618147  384505 pod_ready.go:81] duration metric: took 6.014635ms waiting for pod "coredns-5644d7b6d9-xrfq8" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.618159  384505 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.624120  384505 pod_ready.go:92] pod "etcd-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.624148  384505 pod_ready.go:81] duration metric: took 5.979959ms waiting for pod "etcd-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.624162  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.631373  384505 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.631404  384505 pod_ready.go:81] duration metric: took 7.233318ms waiting for pod "kube-apiserver-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.631418  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.990560  384505 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:18.990593  384505 pod_ready.go:81] duration metric: took 359.165649ms waiting for pod "kube-controller-manager-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:18.990608  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gkhxb" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:17.708531  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.2: (1.864455947s)
	I1002 11:55:17.708567  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.2 from cache
	I1002 11:55:17.708616  384344 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.2
	I1002 11:55:17.708669  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.2
	I1002 11:55:20.492385  384344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.2: (2.783683562s)
	I1002 11:55:20.492427  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.2 from cache
	I1002 11:55:20.492455  384344 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1002 11:55:20.492508  384344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1002 11:55:19.575875  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:22.075666  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:19.526494  384965 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:19.526525  384965 pod_ready.go:81] duration metric: took 2.278556042s waiting for pod "kube-apiserver-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.526542  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:20.927586  384965 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:20.927626  384965 pod_ready.go:81] duration metric: took 1.401074339s waiting for pod "kube-controller-manager-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:20.927641  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.117907  384965 pod_ready.go:92] pod "kube-proxy-gchnc" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:21.117943  384965 pod_ready.go:81] duration metric: took 190.292051ms waiting for pod "kube-proxy-gchnc" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.117957  384965 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.517768  384965 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:21.517788  384965 pod_ready.go:81] duration metric: took 399.822591ms waiting for pod "kube-scheduler-default-k8s-diff-port-777999" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:21.517800  384965 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:23.829704  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:19.390560  384505 pod_ready.go:92] pod "kube-proxy-gkhxb" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:19.390588  384505 pod_ready.go:81] duration metric: took 399.970888ms waiting for pod "kube-proxy-gkhxb" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.390602  384505 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.791405  384505 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-749860" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:19.791443  384505 pod_ready.go:81] duration metric: took 400.826662ms waiting for pod "kube-scheduler-old-k8s-version-749860" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:19.791458  384505 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:22.098383  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:24.098434  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:21.439323  384344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17340-332611/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1002 11:55:21.439378  384344 cache_images.go:123] Successfully loaded all cached images
	I1002 11:55:21.439386  384344 cache_images.go:92] LoadImages completed in 19.585504619s
	I1002 11:55:21.439504  384344 ssh_runner.go:195] Run: crio config
	I1002 11:55:21.510657  384344 cni.go:84] Creating CNI manager for ""
	I1002 11:55:21.510683  384344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:55:21.510703  384344 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 11:55:21.510734  384344 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.143 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-304121 NodeName:no-preload-304121 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 11:55:21.511445  384344 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-304121"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.143
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.143"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 11:55:21.511576  384344 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-304121 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:no-preload-304121 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 11:55:21.511643  384344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 11:55:21.522719  384344 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 11:55:21.522788  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 11:55:21.531557  384344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1002 11:55:21.548551  384344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 11:55:21.565791  384344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1002 11:55:21.583240  384344 ssh_runner.go:195] Run: grep 192.168.39.143	control-plane.minikube.internal$ /etc/hosts
	I1002 11:55:21.587268  384344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 11:55:21.600487  384344 certs.go:56] Setting up /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121 for IP: 192.168.39.143
	I1002 11:55:21.600520  384344 certs.go:190] acquiring lock for shared ca certs: {Name:mk5312ed1c457f69b1615681878e0c59fd7d48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:55:21.600663  384344 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key
	I1002 11:55:21.600697  384344 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key
	I1002 11:55:21.600794  384344 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/client.key
	I1002 11:55:21.600873  384344 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/apiserver.key.62e94479
	I1002 11:55:21.600926  384344 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/proxy-client.key
	I1002 11:55:21.601033  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem (1338 bytes)
	W1002 11:55:21.601061  384344 certs.go:433] ignoring /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865_empty.pem, impossibly tiny 0 bytes
	I1002 11:55:21.601071  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 11:55:21.601093  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/ca.pem (1082 bytes)
	I1002 11:55:21.601118  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/cert.pem (1123 bytes)
	I1002 11:55:21.601146  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/certs/home/jenkins/minikube-integration/17340-332611/.minikube/certs/key.pem (1675 bytes)
	I1002 11:55:21.601182  384344 certs.go:437] found cert: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem (1708 bytes)
	I1002 11:55:21.601818  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 11:55:21.626860  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 11:55:21.650402  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 11:55:21.678876  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/no-preload-304121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 11:55:21.704351  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 11:55:21.729385  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 11:55:21.755185  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 11:55:21.779149  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 11:55:21.802775  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/ssl/certs/3398652.pem --> /usr/share/ca-certificates/3398652.pem (1708 bytes)
	I1002 11:55:21.825691  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 11:55:21.849575  384344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17340-332611/.minikube/certs/339865.pem --> /usr/share/ca-certificates/339865.pem (1338 bytes)
	I1002 11:55:21.872777  384344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 11:55:21.890629  384344 ssh_runner.go:195] Run: openssl version
	I1002 11:55:21.896382  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3398652.pem && ln -fs /usr/share/ca-certificates/3398652.pem /etc/ssl/certs/3398652.pem"
	I1002 11:55:21.906415  384344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3398652.pem
	I1002 11:55:21.911134  384344 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:46 /usr/share/ca-certificates/3398652.pem
	I1002 11:55:21.911202  384344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3398652.pem
	I1002 11:55:21.916782  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3398652.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 11:55:21.926770  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 11:55:21.936394  384344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:55:21.940874  384344 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:37 /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:55:21.940944  384344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 11:55:21.946542  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 11:55:21.956590  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/339865.pem && ln -fs /usr/share/ca-certificates/339865.pem /etc/ssl/certs/339865.pem"
	I1002 11:55:21.966128  384344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/339865.pem
	I1002 11:55:21.971092  384344 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:46 /usr/share/ca-certificates/339865.pem
	I1002 11:55:21.971144  384344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/339865.pem
	I1002 11:55:21.976625  384344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/339865.pem /etc/ssl/certs/51391683.0"
	I1002 11:55:21.987142  384344 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 11:55:21.991548  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 11:55:21.998311  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 11:55:22.004302  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 11:55:22.010267  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 11:55:22.016280  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 11:55:22.022273  384344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 11:55:22.027921  384344 kubeadm.go:404] StartCluster: {Name:no-preload-304121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-304121 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 11:55:22.028050  384344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 11:55:22.028141  384344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:55:22.068066  384344 cri.go:89] found id: ""
	I1002 11:55:22.068147  384344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 11:55:22.079381  384344 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 11:55:22.079406  384344 kubeadm.go:636] restartCluster start
	I1002 11:55:22.079471  384344 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 11:55:22.088977  384344 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:22.090087  384344 kubeconfig.go:92] found "no-preload-304121" server: "https://192.168.39.143:8443"
	I1002 11:55:22.093401  384344 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 11:55:22.103315  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:22.103378  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:22.114520  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:22.114538  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:22.114586  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:22.126040  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:22.626326  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:22.626438  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:22.637215  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:23.126863  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:23.126967  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:23.138035  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:23.626453  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:23.626539  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:23.639113  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:24.126445  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:24.126541  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:24.139561  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:24.626423  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:24.626534  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:24.638442  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:25.127011  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:25.127103  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:25.139945  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:25.626451  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:25.626539  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:25.638919  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:26.126459  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:26.126551  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:26.140068  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:24.574146  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.574656  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.329321  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:28.329400  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.098690  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:28.098837  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:26.626344  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:26.626445  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:26.641274  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:27.126886  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:27.126965  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:27.139451  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:27.627110  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:27.627264  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:27.640675  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:28.126212  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:28.126301  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:28.140048  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:28.626433  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:28.626530  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:28.639683  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:29.127030  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:29.127142  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:29.139681  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:29.626803  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:29.626878  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:29.639468  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:30.127126  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:30.127231  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:30.140930  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:30.626441  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:30.626535  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:30.639070  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:31.126421  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:31.126503  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:31.138724  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:28.575201  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:31.074607  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:30.830079  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:32.832350  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:30.099074  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:32.596870  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:31.627189  384344 api_server.go:166] Checking apiserver status ...
	I1002 11:55:31.627281  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 11:55:31.640362  384344 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 11:55:32.104121  384344 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 11:55:32.104153  384344 kubeadm.go:1128] stopping kube-system containers ...
	I1002 11:55:32.104169  384344 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 11:55:32.104223  384344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 11:55:32.147672  384344 cri.go:89] found id: ""
	I1002 11:55:32.147756  384344 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 11:55:32.164049  384344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:55:32.174941  384344 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:55:32.175041  384344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:55:32.185756  384344 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 11:55:32.185783  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:32.328093  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.120678  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.341378  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.433591  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:33.518381  384344 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:55:33.518458  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:33.530334  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:34.043021  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:34.542602  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:35.042825  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:35.542484  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:36.042547  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:55:36.067551  384344 api_server.go:72] duration metric: took 2.549193903s to wait for apiserver process to appear ...
	I1002 11:55:36.067574  384344 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:55:36.067593  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:33.076598  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:35.077561  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:37.575927  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:35.328950  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:37.330925  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:34.598649  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:36.598851  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:39.099902  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:40.195285  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:40.195318  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:40.195330  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:40.261287  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 11:55:40.261324  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 11:55:40.762016  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:40.776249  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:40.776279  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:41.262027  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:41.277940  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 11:55:41.277971  384344 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 11:55:41.762404  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 11:55:41.767751  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 200:
	ok
	I1002 11:55:41.775963  384344 api_server.go:141] control plane version: v1.28.2
	I1002 11:55:41.775988  384344 api_server.go:131] duration metric: took 5.708406738s to wait for apiserver health ...
	I1002 11:55:41.775997  384344 cni.go:84] Creating CNI manager for ""
	I1002 11:55:41.776003  384344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:55:41.777791  384344 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:55:40.076215  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:42.574607  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:39.831982  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:42.330541  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:41.599812  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:44.097139  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:41.779495  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:55:41.796340  384344 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:55:41.838383  384344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:55:41.863561  384344 system_pods.go:59] 8 kube-system pods found
	I1002 11:55:41.863600  384344 system_pods.go:61] "coredns-5dd5756b68-hn8bw" [f388b655-7f90-436d-a1fd-458f22c7f5e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:55:41.863612  384344 system_pods.go:61] "etcd-no-preload-304121" [b45507da-d57a-45f5-82a3-37b273c42747] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 11:55:41.863621  384344 system_pods.go:61] "kube-apiserver-no-preload-304121" [7f8cdde0-5050-4cea-87c5-56bd0a5d623b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 11:55:41.863630  384344 system_pods.go:61] "kube-controller-manager-no-preload-304121" [24d40a92-d549-48c8-bf5f-983fdc15dcae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 11:55:41.863641  384344 system_pods.go:61] "kube-proxy-cwvr7" [9e3f08e6-92ad-4ebc-afe3-44d5ab81a63a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 11:55:41.863651  384344 system_pods.go:61] "kube-scheduler-no-preload-304121" [cc3c6828-f829-416a-9cfd-ddcc0f485578] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 11:55:41.863665  384344 system_pods.go:61] "metrics-server-57f55c9bc5-lrqt9" [7b70c72d-06b3-40ae-8e0c-ea4794cfe47b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:55:41.863682  384344 system_pods.go:61] "storage-provisioner" [457608a4-5ba9-45d2-841e-889930ce6bd7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:55:41.863694  384344 system_pods.go:74] duration metric: took 25.279676ms to wait for pod list to return data ...
	I1002 11:55:41.863707  384344 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:55:41.870534  384344 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:55:41.870580  384344 node_conditions.go:123] node cpu capacity is 2
	I1002 11:55:41.870636  384344 node_conditions.go:105] duration metric: took 6.921999ms to run NodePressure ...
	I1002 11:55:41.870666  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 11:55:42.164858  384344 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 11:55:42.169831  384344 kubeadm.go:787] kubelet initialised
	I1002 11:55:42.169855  384344 kubeadm.go:788] duration metric: took 4.969744ms waiting for restarted kubelet to initialise ...
	I1002 11:55:42.169864  384344 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:55:42.176338  384344 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:44.195428  384344 pod_ready.go:102] pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:46.195763  384344 pod_ready.go:92] pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:46.195786  384344 pod_ready.go:81] duration metric: took 4.019424872s waiting for pod "coredns-5dd5756b68-hn8bw" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:46.195795  384344 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:44.581249  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:47.074875  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:44.331120  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:46.833248  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:46.099661  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:48.599051  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:48.217529  384344 pod_ready.go:102] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:50.218641  384344 pod_ready.go:102] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:49.575639  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:52.074550  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:49.329627  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:51.330613  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:53.330666  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:51.098233  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:53.098464  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:52.717990  384344 pod_ready.go:102] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:53.716716  384344 pod_ready.go:92] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:53.716751  384344 pod_ready.go:81] duration metric: took 7.520948071s waiting for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:53.716769  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.738808  384344 pod_ready.go:92] pod "kube-apiserver-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.738832  384344 pod_ready.go:81] duration metric: took 1.022054915s waiting for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.738841  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.743979  384344 pod_ready.go:92] pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.743997  384344 pod_ready.go:81] duration metric: took 5.14952ms waiting for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.744006  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cwvr7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.749813  384344 pod_ready.go:92] pod "kube-proxy-cwvr7" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.749843  384344 pod_ready.go:81] duration metric: took 5.828956ms waiting for pod "kube-proxy-cwvr7" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.749855  384344 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.913811  384344 pod_ready.go:92] pod "kube-scheduler-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 11:55:54.913840  384344 pod_ready.go:81] duration metric: took 163.97545ms waiting for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.913853  384344 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace to be "Ready" ...
	I1002 11:55:54.075263  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:56.574518  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:55.829643  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:58.328816  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:55.597512  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:57.598176  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:57.221008  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:59.221092  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:01.221270  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:59.075344  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:01.576898  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:00.330184  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:02.332041  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:55:59.599606  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:02.098251  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:04.098441  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:03.222251  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:05.721050  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:03.577043  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:06.075021  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:04.829434  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:06.830586  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:08.830689  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:06.100229  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:08.597399  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:07.725911  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:10.222275  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:08.574907  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:11.075011  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:10.831040  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:13.330226  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:10.599336  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:12.601338  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:12.721538  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:14.732864  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:13.075225  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:15.575267  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:15.831410  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:18.328821  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:15.098085  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:17.598406  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:17.220843  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:19.221812  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:18.074885  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:20.575220  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:20.830090  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:23.329239  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:20.108397  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:22.597329  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:21.723316  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:24.220817  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:26.222858  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:23.075276  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:25.574332  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:27.574872  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:25.330095  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:27.831991  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:24.598737  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:27.098098  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:28.721424  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:30.721466  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:30.074535  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:32.075748  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:30.330155  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:32.830009  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:29.597397  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:31.598389  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:33.598490  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:33.223521  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:35.719548  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:34.575020  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.074654  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:35.331567  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.832286  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:35.598829  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.599403  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:37.722451  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:40.223547  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:39.075433  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:41.575885  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:40.329838  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:42.330038  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:40.099862  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:42.598269  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:42.723887  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:45.221944  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:44.075128  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:46.075540  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:44.331960  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:46.829987  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:45.097469  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:47.098616  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:47.222108  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:49.721938  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:48.589935  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:51.074993  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:49.331749  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:51.830280  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:53.830731  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:49.598433  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:52.097486  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:54.098228  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:52.222646  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:54.726547  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:53.076322  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:55.575236  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:56.329005  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:58.330077  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:56.598418  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:59.098019  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:57.221753  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:59.721824  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:56:58.074481  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:00.576860  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:00.831342  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:03.328695  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:01.598124  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:04.098241  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:02.221634  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:04.222422  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:03.075152  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:05.076964  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:07.577621  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:05.328811  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:07.329223  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:06.598041  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:09.097384  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:06.724181  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:09.221108  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:11.223407  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:10.077910  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:12.574292  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:09.331559  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:11.828655  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:13.829065  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:11.098632  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:13.099363  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:13.721785  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:16.222201  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:14.574467  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:16.576124  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:15.829618  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:17.830298  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:15.598739  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:18.097854  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:18.722947  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:21.220868  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:19.074608  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:21.079563  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:20.329680  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:22.335299  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:20.109847  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:22.598994  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:23.221458  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:25.222249  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:23.575662  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:26.075111  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:24.829500  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:26.830678  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:25.099426  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:27.598577  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:27.721159  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:29.725949  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:28.574416  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:30.576031  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:29.330079  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:31.330829  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:33.829243  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:30.098615  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:32.598161  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:32.220933  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:34.720190  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:33.075330  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:35.075824  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:37.574487  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:35.829585  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:38.333997  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:34.598838  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:37.098682  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:36.723779  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:39.222751  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:40.074293  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:42.574665  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:40.829324  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:43.329265  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:39.598047  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:41.598338  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:44.097421  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:41.720538  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:43.721398  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:46.220972  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:45.074832  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:47.573962  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:45.330175  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:47.829115  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:46.097496  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:48.098108  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:48.221977  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:50.222810  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:49.576755  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.076442  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:49.829764  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.330051  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:50.099771  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.599534  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:52.223223  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:54.721544  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:54.574341  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:56.574466  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:54.829215  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:56.829468  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:58.829730  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:55.097141  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:57.598230  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:57.221854  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:59.721190  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:58.574928  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:00.575201  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:00.830156  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:03.329206  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:57:59.599838  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:02.097630  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:04.099434  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:01.724512  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:04.223282  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:03.076896  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:05.576101  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:05.330313  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:07.830038  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:06.597389  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:09.098677  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:06.721370  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:08.723225  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:11.224608  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:08.076078  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:10.574982  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:12.575115  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:09.832412  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:12.330220  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:11.597760  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:13.598933  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:13.726487  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.220404  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:14.575310  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.576156  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:14.330536  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.829762  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:18.833076  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:16.099600  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:18.599713  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:18.222118  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:20.722548  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:19.076690  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:21.575073  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:21.330604  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.829742  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:21.099777  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.598614  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.220183  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:25.221895  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:23.575355  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:25.575510  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:25.830538  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:28.329783  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:26.097290  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:28.097568  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:27.722661  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.221305  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:28.074457  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.074944  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:32.075905  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.831228  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:33.328903  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:30.098502  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:32.599120  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:32.221445  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:34.224133  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:34.075953  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:36.574997  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:35.330632  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:37.830117  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:35.101830  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:37.597886  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:36.722453  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:38.722619  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:40.725507  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:39.077321  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:41.574812  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:40.329004  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:42.329704  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:39.598243  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:41.600336  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:44.098496  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:43.225247  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:45.721116  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:43.574928  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:46.073774  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:44.830119  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:47.330229  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:46.101053  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:48.597255  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:47.724301  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:50.220275  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:48.074634  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:50.075498  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:52.576147  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:49.829149  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:52.328994  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:50.598113  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:53.096876  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:52.224282  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:54.721074  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:54.576355  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:57.074445  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:54.330474  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:56.331220  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:58.829693  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:55.098655  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:57.598659  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:56.721698  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:58.721958  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:01.222685  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:58:59.074760  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:01.076178  384787 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:01.551409  384787 pod_ready.go:81] duration metric: took 4m0.000833874s waiting for pod "metrics-server-57f55c9bc5-d8c7b" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:01.551453  384787 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:01.551481  384787 pod_ready.go:38] duration metric: took 4m12.797362192s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:01.551549  384787 kubeadm.go:640] restartCluster took 4m35.116019688s
	W1002 11:59:01.551687  384787 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1002 11:59:01.551757  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 11:59:00.830381  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:02.830963  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:00.103080  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:02.600662  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:03.720777  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:05.722315  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:05.330034  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:07.835944  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:05.098121  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:07.098246  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:09.099171  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:07.725245  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:10.221073  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:10.328885  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:12.331198  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:11.599122  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:14.099609  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:15.268063  384787 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.716271748s)
	I1002 11:59:15.268160  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:15.282632  384787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:59:15.294231  384787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:59:15.305847  384787 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:59:15.305892  384787 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 11:59:15.365627  384787 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 11:59:15.365703  384787 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 11:59:15.546049  384787 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 11:59:15.546175  384787 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 11:59:15.546300  384787 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 11:59:15.810889  384787 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 11:59:12.221147  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:14.222293  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:16.223901  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:15.813908  384787 out.go:204]   - Generating certificates and keys ...
	I1002 11:59:15.814079  384787 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 11:59:15.814178  384787 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 11:59:15.814257  384787 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 11:59:15.814309  384787 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1002 11:59:15.814451  384787 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 11:59:15.814528  384787 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1002 11:59:15.814874  384787 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1002 11:59:15.815489  384787 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1002 11:59:15.816067  384787 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 11:59:15.816586  384787 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 11:59:15.817099  384787 kubeadm.go:322] [certs] Using the existing "sa" key
	I1002 11:59:15.817161  384787 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 11:59:15.988485  384787 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 11:59:16.038665  384787 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 11:59:16.218038  384787 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 11:59:16.415133  384787 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 11:59:16.415531  384787 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 11:59:16.418000  384787 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 11:59:16.420952  384787 out.go:204]   - Booting up control plane ...
	I1002 11:59:16.421147  384787 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 11:59:16.421273  384787 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 11:59:16.423255  384787 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 11:59:16.442699  384787 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:59:16.443964  384787 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:59:16.444055  384787 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 11:59:16.602169  384787 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 11:59:14.331978  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:16.830188  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:18.831449  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:16.597731  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:18.598683  384505 pod_ready.go:102] pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:18.722865  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:21.222671  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:21.329396  384965 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:21.518315  384965 pod_ready.go:81] duration metric: took 4m0.000482629s waiting for pod "metrics-server-57f55c9bc5-wk2c7" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:21.518363  384965 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:21.518378  384965 pod_ready.go:38] duration metric: took 4m4.800712941s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:21.518406  384965 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:59:21.518451  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 11:59:21.518519  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 11:59:21.587182  384965 cri.go:89] found id: "3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:21.587210  384965 cri.go:89] found id: ""
	I1002 11:59:21.587221  384965 logs.go:284] 1 containers: [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735]
	I1002 11:59:21.587285  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.592996  384965 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 11:59:21.593072  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 11:59:21.635267  384965 cri.go:89] found id: "8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:21.635293  384965 cri.go:89] found id: ""
	I1002 11:59:21.635306  384965 logs.go:284] 1 containers: [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d]
	I1002 11:59:21.635367  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.640347  384965 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 11:59:21.640428  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 11:59:21.686113  384965 cri.go:89] found id: "f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:21.686146  384965 cri.go:89] found id: ""
	I1002 11:59:21.686157  384965 logs.go:284] 1 containers: [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d]
	I1002 11:59:21.686224  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.691867  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 11:59:21.691959  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 11:59:21.745210  384965 cri.go:89] found id: "7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:21.745245  384965 cri.go:89] found id: ""
	I1002 11:59:21.745257  384965 logs.go:284] 1 containers: [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e]
	I1002 11:59:21.745330  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.750774  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 11:59:21.750862  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 11:59:21.810054  384965 cri.go:89] found id: "d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:21.810084  384965 cri.go:89] found id: ""
	I1002 11:59:21.810099  384965 logs.go:284] 1 containers: [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6]
	I1002 11:59:21.810161  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.815433  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 11:59:21.815518  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 11:59:21.858759  384965 cri.go:89] found id: "beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:21.858794  384965 cri.go:89] found id: ""
	I1002 11:59:21.858807  384965 logs.go:284] 1 containers: [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f]
	I1002 11:59:21.858887  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.864818  384965 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 11:59:21.864900  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 11:59:21.920312  384965 cri.go:89] found id: ""
	I1002 11:59:21.920343  384965 logs.go:284] 0 containers: []
	W1002 11:59:21.920353  384965 logs.go:286] No container was found matching "kindnet"
	I1002 11:59:21.920362  384965 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 11:59:21.920429  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 11:59:21.964677  384965 cri.go:89] found id: "2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:21.964708  384965 cri.go:89] found id: "b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:21.964715  384965 cri.go:89] found id: ""
	I1002 11:59:21.964724  384965 logs.go:284] 2 containers: [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358]
	I1002 11:59:21.964812  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.970514  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:21.976118  384965 logs.go:123] Gathering logs for kube-apiserver [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735] ...
	I1002 11:59:21.976158  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:22.026289  384965 logs.go:123] Gathering logs for kube-controller-manager [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f] ...
	I1002 11:59:22.026337  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:22.094330  384965 logs.go:123] Gathering logs for storage-provisioner [b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358] ...
	I1002 11:59:22.094389  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:22.133879  384965 logs.go:123] Gathering logs for container status ...
	I1002 11:59:22.133911  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 11:59:22.186645  384965 logs.go:123] Gathering logs for dmesg ...
	I1002 11:59:22.186688  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 11:59:22.200091  384965 logs.go:123] Gathering logs for kube-proxy [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6] ...
	I1002 11:59:22.200132  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:22.245383  384965 logs.go:123] Gathering logs for etcd [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d] ...
	I1002 11:59:22.245420  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:22.312167  384965 logs.go:123] Gathering logs for kube-scheduler [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e] ...
	I1002 11:59:22.312212  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:22.358596  384965 logs.go:123] Gathering logs for kubelet ...
	I1002 11:59:22.358631  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 11:59:22.417643  384965 logs.go:123] Gathering logs for coredns [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d] ...
	I1002 11:59:22.417695  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:22.467793  384965 logs.go:123] Gathering logs for storage-provisioner [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175] ...
	I1002 11:59:22.467830  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:22.509173  384965 logs.go:123] Gathering logs for CRI-O ...
	I1002 11:59:22.509216  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 11:59:23.037502  384965 logs.go:123] Gathering logs for describe nodes ...
	I1002 11:59:23.037554  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 11:59:19.792274  384505 pod_ready.go:81] duration metric: took 4m0.000796599s waiting for pod "metrics-server-74d5856cc6-8rbnz" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:19.792309  384505 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:19.792337  384505 pod_ready.go:38] duration metric: took 4m1.196150969s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:19.792389  384505 kubeadm.go:640] restartCluster took 5m11.202020009s
	W1002 11:59:19.792478  384505 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1002 11:59:19.792509  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 11:59:24.926525  384505 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.133982838s)
	I1002 11:59:24.926616  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:24.943054  384505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 11:59:24.953201  384505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 11:59:24.963105  384505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 11:59:24.963158  384505 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1002 11:59:25.027860  384505 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1002 11:59:25.027986  384505 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 11:59:25.214224  384505 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 11:59:25.214399  384505 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 11:59:25.214529  384505 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 11:59:25.472019  384505 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 11:59:25.472706  384505 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 11:59:25.481965  384505 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1002 11:59:25.630265  384505 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 11:59:25.105120  384787 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502545 seconds
	I1002 11:59:25.105321  384787 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 11:59:25.124191  384787 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 11:59:25.659886  384787 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 11:59:25.660110  384787 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-487027 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 11:59:26.180742  384787 kubeadm.go:322] [bootstrap-token] Using token: tg9u90.7q86afgrs7pieyop
	I1002 11:59:23.723485  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:25.724673  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:26.182574  384787 out.go:204]   - Configuring RBAC rules ...
	I1002 11:59:26.182738  384787 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 11:59:26.190559  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 11:59:26.200659  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 11:59:26.212391  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 11:59:26.217946  384787 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 11:59:26.226534  384787 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 11:59:26.248000  384787 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 11:59:26.545226  384787 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 11:59:26.604475  384787 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 11:59:26.605636  384787 kubeadm.go:322] 
	I1002 11:59:26.605726  384787 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 11:59:26.605738  384787 kubeadm.go:322] 
	I1002 11:59:26.605810  384787 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 11:59:26.605815  384787 kubeadm.go:322] 
	I1002 11:59:26.605844  384787 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 11:59:26.605914  384787 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 11:59:26.605973  384787 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 11:59:26.605981  384787 kubeadm.go:322] 
	I1002 11:59:26.606052  384787 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 11:59:26.606058  384787 kubeadm.go:322] 
	I1002 11:59:26.606097  384787 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 11:59:26.606101  384787 kubeadm.go:322] 
	I1002 11:59:26.606143  384787 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 11:59:26.606203  384787 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 11:59:26.606263  384787 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 11:59:26.606267  384787 kubeadm.go:322] 
	I1002 11:59:26.606334  384787 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 11:59:26.606438  384787 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 11:59:26.606446  384787 kubeadm.go:322] 
	I1002 11:59:26.606580  384787 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tg9u90.7q86afgrs7pieyop \
	I1002 11:59:26.606732  384787 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 11:59:26.606764  384787 kubeadm.go:322] 	--control-plane 
	I1002 11:59:26.606773  384787 kubeadm.go:322] 
	I1002 11:59:26.606906  384787 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 11:59:26.606919  384787 kubeadm.go:322] 
	I1002 11:59:26.607066  384787 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tg9u90.7q86afgrs7pieyop \
	I1002 11:59:26.607192  384787 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 11:59:26.608470  384787 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 11:59:26.608503  384787 cni.go:84] Creating CNI manager for ""
	I1002 11:59:26.608547  384787 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:59:26.610426  384787 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:59:25.632074  384505 out.go:204]   - Generating certificates and keys ...
	I1002 11:59:25.632197  384505 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 11:59:25.632294  384505 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 11:59:25.632398  384505 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 11:59:25.632546  384505 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1002 11:59:25.632693  384505 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 11:59:25.633319  384505 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1002 11:59:25.633417  384505 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1002 11:59:25.633720  384505 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1002 11:59:25.634302  384505 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 11:59:25.635341  384505 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 11:59:25.635391  384505 kubeadm.go:322] [certs] Using the existing "sa" key
	I1002 11:59:25.635461  384505 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 11:59:25.743684  384505 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 11:59:25.940709  384505 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 11:59:26.418951  384505 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 11:59:26.676172  384505 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 11:59:26.677698  384505 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 11:59:26.612002  384787 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:59:26.646809  384787 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:59:26.709486  384787 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:59:26.709648  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:26.709720  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=embed-certs-487027 minikube.k8s.io/updated_at=2023_10_02T11_59_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:26.778472  384787 ops.go:34] apiserver oom_adj: -16
	I1002 11:59:27.199359  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:27.351099  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:25.716079  384965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:59:25.739754  384965 api_server.go:72] duration metric: took 4m15.900505961s to wait for apiserver process to appear ...
	I1002 11:59:25.739785  384965 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:59:25.739834  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 11:59:25.739904  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 11:59:25.788719  384965 cri.go:89] found id: "3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:25.788747  384965 cri.go:89] found id: ""
	I1002 11:59:25.788758  384965 logs.go:284] 1 containers: [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735]
	I1002 11:59:25.788824  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.794426  384965 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 11:59:25.794500  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 11:59:25.836689  384965 cri.go:89] found id: "8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:25.836721  384965 cri.go:89] found id: ""
	I1002 11:59:25.836731  384965 logs.go:284] 1 containers: [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d]
	I1002 11:59:25.836808  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.841671  384965 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 11:59:25.841744  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 11:59:25.883947  384965 cri.go:89] found id: "f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:25.883976  384965 cri.go:89] found id: ""
	I1002 11:59:25.883986  384965 logs.go:284] 1 containers: [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d]
	I1002 11:59:25.884049  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.892631  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 11:59:25.892758  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 11:59:25.966469  384965 cri.go:89] found id: "7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:25.966502  384965 cri.go:89] found id: ""
	I1002 11:59:25.966514  384965 logs.go:284] 1 containers: [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e]
	I1002 11:59:25.966575  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:25.971814  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 11:59:25.971890  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 11:59:26.020970  384965 cri.go:89] found id: "d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:26.021002  384965 cri.go:89] found id: ""
	I1002 11:59:26.021013  384965 logs.go:284] 1 containers: [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6]
	I1002 11:59:26.021076  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.025582  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 11:59:26.025657  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 11:59:26.077339  384965 cri.go:89] found id: "beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:26.077371  384965 cri.go:89] found id: ""
	I1002 11:59:26.077383  384965 logs.go:284] 1 containers: [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f]
	I1002 11:59:26.077448  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.082311  384965 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 11:59:26.082396  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 11:59:26.126803  384965 cri.go:89] found id: ""
	I1002 11:59:26.126833  384965 logs.go:284] 0 containers: []
	W1002 11:59:26.126843  384965 logs.go:286] No container was found matching "kindnet"
	I1002 11:59:26.126851  384965 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 11:59:26.126992  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 11:59:26.176829  384965 cri.go:89] found id: "2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:26.176858  384965 cri.go:89] found id: "b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:26.176866  384965 cri.go:89] found id: ""
	I1002 11:59:26.176876  384965 logs.go:284] 2 containers: [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358]
	I1002 11:59:26.176945  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.182892  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:26.189288  384965 logs.go:123] Gathering logs for kube-controller-manager [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f] ...
	I1002 11:59:26.189316  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:26.257856  384965 logs.go:123] Gathering logs for storage-provisioner [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175] ...
	I1002 11:59:26.257910  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:26.297691  384965 logs.go:123] Gathering logs for container status ...
	I1002 11:59:26.297747  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 11:59:26.351211  384965 logs.go:123] Gathering logs for kubelet ...
	I1002 11:59:26.351254  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 11:59:26.425373  384965 logs.go:123] Gathering logs for describe nodes ...
	I1002 11:59:26.425416  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 11:59:26.568944  384965 logs.go:123] Gathering logs for kube-proxy [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6] ...
	I1002 11:59:26.568985  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:26.627406  384965 logs.go:123] Gathering logs for dmesg ...
	I1002 11:59:26.627449  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 11:59:26.641249  384965 logs.go:123] Gathering logs for coredns [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d] ...
	I1002 11:59:26.641281  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:26.696939  384965 logs.go:123] Gathering logs for storage-provisioner [b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358] ...
	I1002 11:59:26.696974  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:26.744365  384965 logs.go:123] Gathering logs for CRI-O ...
	I1002 11:59:26.744406  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 11:59:27.279579  384965 logs.go:123] Gathering logs for kube-apiserver [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735] ...
	I1002 11:59:27.279639  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:27.366447  384965 logs.go:123] Gathering logs for etcd [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d] ...
	I1002 11:59:27.366508  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:27.436429  384965 logs.go:123] Gathering logs for kube-scheduler [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e] ...
	I1002 11:59:27.436476  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:26.679464  384505 out.go:204]   - Booting up control plane ...
	I1002 11:59:26.679594  384505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 11:59:26.688060  384505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 11:59:26.700892  384505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 11:59:26.702245  384505 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 11:59:26.706277  384505 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 11:59:28.222692  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:30.223561  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:27.973079  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:28.472938  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:28.973900  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:29.473650  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:29.972984  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:30.473216  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:30.973931  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:31.474026  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:31.973024  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:32.473723  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:29.989828  384965 api_server.go:253] Checking apiserver healthz at https://192.168.61.251:8444/healthz ...
	I1002 11:59:29.995664  384965 api_server.go:279] https://192.168.61.251:8444/healthz returned 200:
	ok
	I1002 11:59:29.998819  384965 api_server.go:141] control plane version: v1.28.2
	I1002 11:59:29.998846  384965 api_server.go:131] duration metric: took 4.25905343s to wait for apiserver health ...
	I1002 11:59:29.998855  384965 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:59:29.998882  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 11:59:29.998944  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 11:59:30.037898  384965 cri.go:89] found id: "3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:30.037925  384965 cri.go:89] found id: ""
	I1002 11:59:30.037935  384965 logs.go:284] 1 containers: [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735]
	I1002 11:59:30.038014  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.042751  384965 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 11:59:30.042835  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 11:59:30.085339  384965 cri.go:89] found id: "8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:30.085378  384965 cri.go:89] found id: ""
	I1002 11:59:30.085390  384965 logs.go:284] 1 containers: [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d]
	I1002 11:59:30.085463  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.090184  384965 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 11:59:30.090265  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 11:59:30.130574  384965 cri.go:89] found id: "f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:30.130602  384965 cri.go:89] found id: ""
	I1002 11:59:30.130611  384965 logs.go:284] 1 containers: [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d]
	I1002 11:59:30.130665  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.135040  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 11:59:30.135125  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 11:59:30.178044  384965 cri.go:89] found id: "7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:30.178067  384965 cri.go:89] found id: ""
	I1002 11:59:30.178078  384965 logs.go:284] 1 containers: [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e]
	I1002 11:59:30.178144  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.182586  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 11:59:30.182662  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 11:59:30.226121  384965 cri.go:89] found id: "d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:30.226142  384965 cri.go:89] found id: ""
	I1002 11:59:30.226152  384965 logs.go:284] 1 containers: [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6]
	I1002 11:59:30.226209  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.231080  384965 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 11:59:30.231156  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 11:59:30.275499  384965 cri.go:89] found id: "beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:30.275533  384965 cri.go:89] found id: ""
	I1002 11:59:30.275545  384965 logs.go:284] 1 containers: [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f]
	I1002 11:59:30.275611  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.281023  384965 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 11:59:30.281089  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 11:59:30.325580  384965 cri.go:89] found id: ""
	I1002 11:59:30.325610  384965 logs.go:284] 0 containers: []
	W1002 11:59:30.325622  384965 logs.go:286] No container was found matching "kindnet"
	I1002 11:59:30.325630  384965 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 11:59:30.325691  384965 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 11:59:30.372727  384965 cri.go:89] found id: "2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:30.372760  384965 cri.go:89] found id: "b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:30.372766  384965 cri.go:89] found id: ""
	I1002 11:59:30.372776  384965 logs.go:284] 2 containers: [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358]
	I1002 11:59:30.372838  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.377541  384965 ssh_runner.go:195] Run: which crictl
	I1002 11:59:30.382371  384965 logs.go:123] Gathering logs for kubelet ...
	I1002 11:59:30.382403  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 11:59:30.449081  384965 logs.go:123] Gathering logs for etcd [8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d] ...
	I1002 11:59:30.449132  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b9af145fa743d0b4d8200eccae2d61aec9232571e7b18b4ada857e5fabbb50d"
	I1002 11:59:30.519339  384965 logs.go:123] Gathering logs for coredns [f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d] ...
	I1002 11:59:30.519392  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4357b618abec5960645a567e36c4f2ba77ba1c15da237323a5f2f109f47581d"
	I1002 11:59:30.566205  384965 logs.go:123] Gathering logs for storage-provisioner [2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175] ...
	I1002 11:59:30.566250  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3596d8e4114374cbe4681be5d35e466e30c670977692d44a0e7220797d7175"
	I1002 11:59:30.607933  384965 logs.go:123] Gathering logs for container status ...
	I1002 11:59:30.607973  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 11:59:30.655904  384965 logs.go:123] Gathering logs for kube-apiserver [3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735] ...
	I1002 11:59:30.655946  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d34e284efffda2b7100e5be47bc9e7c06b1760aa227b6dabebb610abcb86735"
	I1002 11:59:30.717563  384965 logs.go:123] Gathering logs for kube-controller-manager [beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f] ...
	I1002 11:59:30.717619  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beb885cf3eedd24d828bd1e907ba546e1d2876b787d4bf0680242f36ef1c774f"
	I1002 11:59:30.779216  384965 logs.go:123] Gathering logs for storage-provisioner [b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358] ...
	I1002 11:59:30.779268  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5dd54a6498cc46fda51d95f1069a102af45c7e6539db684fb62b09a8115d358"
	I1002 11:59:30.822075  384965 logs.go:123] Gathering logs for CRI-O ...
	I1002 11:59:30.822114  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 11:59:31.180609  384965 logs.go:123] Gathering logs for dmesg ...
	I1002 11:59:31.180664  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 11:59:31.196239  384965 logs.go:123] Gathering logs for describe nodes ...
	I1002 11:59:31.196274  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 11:59:31.345274  384965 logs.go:123] Gathering logs for kube-scheduler [7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e] ...
	I1002 11:59:31.345318  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a5a17cf18027adeb3127f8a5a2bf60d7480321e1c6e1b7c1384a45fd2a0866e"
	I1002 11:59:31.392175  384965 logs.go:123] Gathering logs for kube-proxy [d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6] ...
	I1002 11:59:31.392212  384965 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d858d8eba37bc06638192465442b41c67b495e575c378d8c7da4408501609ff6"
	I1002 11:59:33.946599  384965 system_pods.go:59] 8 kube-system pods found
	I1002 11:59:33.946635  384965 system_pods.go:61] "coredns-5dd5756b68-9wv56" [f04d6125-ea28-41cc-9251-7ccee27162bc] Running
	I1002 11:59:33.946643  384965 system_pods.go:61] "etcd-default-k8s-diff-port-777999" [5bc34f24-2922-4ce8-b11d-935f9b3c8b4c] Running
	I1002 11:59:33.946650  384965 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-777999" [b44f8ca6-f43e-4c99-af8d-23255f94257c] Running
	I1002 11:59:33.946656  384965 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-777999" [5e20830f-d51b-4ca7-a7b6-e0f24f4a50e1] Running
	I1002 11:59:33.946659  384965 system_pods.go:61] "kube-proxy-gchnc" [061811c7-2ac8-448a-b441-838f9aaf9145] Running
	I1002 11:59:33.946664  384965 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-777999" [9d874541-7d2e-4c9b-8935-2bf7386e6a07] Running
	I1002 11:59:33.946677  384965 system_pods.go:61] "metrics-server-57f55c9bc5-wk2c7" [f28e9db7-2182-40d8-85a7-fa40c2ff8077] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:33.946687  384965 system_pods.go:61] "storage-provisioner" [aff1275b-909d-4c70-9fb5-cb36170c591e] Running
	I1002 11:59:33.946704  384965 system_pods.go:74] duration metric: took 3.947840874s to wait for pod list to return data ...
	I1002 11:59:33.946715  384965 default_sa.go:34] waiting for default service account to be created ...
	I1002 11:59:33.950028  384965 default_sa.go:45] found service account: "default"
	I1002 11:59:33.950059  384965 default_sa.go:55] duration metric: took 3.333093ms for default service account to be created ...
	I1002 11:59:33.950069  384965 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 11:59:33.956623  384965 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:33.956651  384965 system_pods.go:89] "coredns-5dd5756b68-9wv56" [f04d6125-ea28-41cc-9251-7ccee27162bc] Running
	I1002 11:59:33.956657  384965 system_pods.go:89] "etcd-default-k8s-diff-port-777999" [5bc34f24-2922-4ce8-b11d-935f9b3c8b4c] Running
	I1002 11:59:33.956662  384965 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-777999" [b44f8ca6-f43e-4c99-af8d-23255f94257c] Running
	I1002 11:59:33.956666  384965 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-777999" [5e20830f-d51b-4ca7-a7b6-e0f24f4a50e1] Running
	I1002 11:59:33.956670  384965 system_pods.go:89] "kube-proxy-gchnc" [061811c7-2ac8-448a-b441-838f9aaf9145] Running
	I1002 11:59:33.956674  384965 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-777999" [9d874541-7d2e-4c9b-8935-2bf7386e6a07] Running
	I1002 11:59:33.956681  384965 system_pods.go:89] "metrics-server-57f55c9bc5-wk2c7" [f28e9db7-2182-40d8-85a7-fa40c2ff8077] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:33.956686  384965 system_pods.go:89] "storage-provisioner" [aff1275b-909d-4c70-9fb5-cb36170c591e] Running
	I1002 11:59:33.956694  384965 system_pods.go:126] duration metric: took 6.618721ms to wait for k8s-apps to be running ...
	I1002 11:59:33.956704  384965 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:59:33.956749  384965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:33.976674  384965 system_svc.go:56] duration metric: took 19.952308ms WaitForService to wait for kubelet.
	I1002 11:59:33.976710  384965 kubeadm.go:581] duration metric: took 4m24.137472355s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:59:33.976750  384965 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:59:33.982173  384965 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:59:33.982211  384965 node_conditions.go:123] node cpu capacity is 2
	I1002 11:59:33.982227  384965 node_conditions.go:105] duration metric: took 5.470843ms to run NodePressure ...
	I1002 11:59:33.982242  384965 start.go:228] waiting for startup goroutines ...
	I1002 11:59:33.982251  384965 start.go:233] waiting for cluster config update ...
	I1002 11:59:33.982303  384965 start.go:242] writing updated cluster config ...
	I1002 11:59:33.982687  384965 ssh_runner.go:195] Run: rm -f paused
	I1002 11:59:34.039684  384965 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 11:59:34.041739  384965 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-777999" cluster and "default" namespace by default
	I1002 11:59:32.723475  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:35.221523  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:32.973400  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:33.473644  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:33.973820  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:34.473607  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:34.973848  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:35.473328  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:35.973485  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:36.473888  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:36.973837  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:37.473514  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:37.973633  384787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.094807  384787 kubeadm.go:1081] duration metric: took 11.38520709s to wait for elevateKubeSystemPrivileges.
	I1002 11:59:38.094846  384787 kubeadm.go:406] StartCluster complete in 5m11.722637512s
	I1002 11:59:38.094872  384787 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:38.094972  384787 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:59:38.097201  384787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:38.097495  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:59:38.097829  384787 config.go:182] Loaded profile config "embed-certs-487027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:59:38.097966  384787 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:59:38.098056  384787 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-487027"
	I1002 11:59:38.098079  384787 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-487027"
	I1002 11:59:38.098083  384787 addons.go:69] Setting default-storageclass=true in profile "embed-certs-487027"
	I1002 11:59:38.098098  384787 addons.go:69] Setting metrics-server=true in profile "embed-certs-487027"
	I1002 11:59:38.098110  384787 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-487027"
	I1002 11:59:38.098113  384787 addons.go:231] Setting addon metrics-server=true in "embed-certs-487027"
	W1002 11:59:38.098125  384787 addons.go:240] addon metrics-server should already be in state true
	I1002 11:59:38.098177  384787 host.go:66] Checking if "embed-certs-487027" exists ...
	I1002 11:59:38.098608  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.098643  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.098647  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	W1002 11:59:38.098092  384787 addons.go:240] addon storage-provisioner should already be in state true
	I1002 11:59:38.098827  384787 host.go:66] Checking if "embed-certs-487027" exists ...
	I1002 11:59:38.098670  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.099207  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.099235  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.118215  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40451
	I1002 11:59:38.118691  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.119232  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.119260  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.119649  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.120147  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.120182  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.129398  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41027
	I1002 11:59:38.129652  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35275
	I1002 11:59:38.130092  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.130723  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.130746  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.131301  384787 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-487027" context rescaled to 1 replicas
	I1002 11:59:38.131342  384787 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.147 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:59:38.133196  384787 out.go:177] * Verifying Kubernetes components...
	I1002 11:59:38.134675  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:38.132825  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.134964  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.135242  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.135408  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.135434  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.135834  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.136413  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.136455  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.138974  384787 addons.go:231] Setting addon default-storageclass=true in "embed-certs-487027"
	W1002 11:59:38.138995  384787 addons.go:240] addon default-storageclass should already be in state true
	I1002 11:59:38.139025  384787 host.go:66] Checking if "embed-certs-487027" exists ...
	I1002 11:59:38.139434  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.139469  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.141226  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40053
	I1002 11:59:38.141643  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.142086  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.142104  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.142433  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.142609  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.144425  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:59:38.146525  384787 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 11:59:38.148187  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 11:59:38.148204  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 11:59:38.148227  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:59:38.152187  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.152549  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:59:38.152575  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.152783  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:59:38.152988  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:59:38.153139  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:59:38.153280  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:59:38.157114  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33487
	I1002 11:59:38.157655  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.158192  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.158211  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.158619  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.159253  384787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:38.159290  384787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:38.159506  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34867
	I1002 11:59:38.159895  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.160383  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.160395  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.160727  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.160902  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.162835  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:59:38.164490  384787 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:59:37.211498  384505 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.504818 seconds
	I1002 11:59:37.211660  384505 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 11:59:37.229976  384505 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 11:59:37.759297  384505 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 11:59:37.759467  384505 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-749860 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1002 11:59:38.284135  384505 kubeadm.go:322] [bootstrap-token] Using token: rt49x4.7033jvaiaszsonci
	I1002 11:59:38.285950  384505 out.go:204]   - Configuring RBAC rules ...
	I1002 11:59:38.286108  384505 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 11:59:38.299290  384505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 11:59:38.306326  384505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 11:59:38.312137  384505 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 11:59:38.320028  384505 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 11:59:38.439411  384505 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 11:59:38.704007  384505 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 11:59:38.705937  384505 kubeadm.go:322] 
	I1002 11:59:38.706075  384505 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 11:59:38.706096  384505 kubeadm.go:322] 
	I1002 11:59:38.706210  384505 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 11:59:38.706221  384505 kubeadm.go:322] 
	I1002 11:59:38.706256  384505 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 11:59:38.706341  384505 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 11:59:38.706433  384505 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 11:59:38.706448  384505 kubeadm.go:322] 
	I1002 11:59:38.706527  384505 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 11:59:38.706614  384505 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 11:59:38.706701  384505 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 11:59:38.706712  384505 kubeadm.go:322] 
	I1002 11:59:38.706805  384505 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1002 11:59:38.706898  384505 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 11:59:38.706910  384505 kubeadm.go:322] 
	I1002 11:59:38.707003  384505 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rt49x4.7033jvaiaszsonci \
	I1002 11:59:38.707134  384505 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 11:59:38.707169  384505 kubeadm.go:322]     --control-plane 	  
	I1002 11:59:38.707179  384505 kubeadm.go:322] 
	I1002 11:59:38.707272  384505 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 11:59:38.707283  384505 kubeadm.go:322] 
	I1002 11:59:38.707373  384505 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rt49x4.7033jvaiaszsonci \
	I1002 11:59:38.707500  384505 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
	I1002 11:59:38.708451  384505 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 11:59:38.708478  384505 cni.go:84] Creating CNI manager for ""
	I1002 11:59:38.708501  384505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 11:59:38.710166  384505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 11:59:38.711596  384505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 11:59:38.725385  384505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 11:59:38.748155  384505 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 11:59:38.748294  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.748295  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=old-k8s-version-749860 minikube.k8s.io/updated_at=2023_10_02T11_59_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.795585  384505 ops.go:34] apiserver oom_adj: -16
	I1002 11:59:39.068200  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:38.166036  384787 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:38.166047  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:59:38.166063  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:59:38.169435  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.169903  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:59:38.169929  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.170098  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:59:38.170273  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:59:38.170517  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:59:38.170711  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:59:38.177450  384787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35439
	I1002 11:59:38.178044  384787 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:38.178596  384787 main.go:141] libmachine: Using API Version  1
	I1002 11:59:38.178616  384787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:38.179009  384787 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:38.179244  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetState
	I1002 11:59:38.181209  384787 main.go:141] libmachine: (embed-certs-487027) Calling .DriverName
	I1002 11:59:38.181596  384787 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:38.181613  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:59:38.181641  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHHostname
	I1002 11:59:38.185272  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.185785  384787 main.go:141] libmachine: (embed-certs-487027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:60:23", ip: ""} in network mk-embed-certs-487027: {Iface:virbr4 ExpiryTime:2023-10-02 12:54:11 +0000 UTC Type:0 Mac:52:54:00:06:60:23 Iaid: IPaddr:192.168.72.147 Prefix:24 Hostname:embed-certs-487027 Clientid:01:52:54:00:06:60:23}
	I1002 11:59:38.185813  384787 main.go:141] libmachine: (embed-certs-487027) DBG | domain embed-certs-487027 has defined IP address 192.168.72.147 and MAC address 52:54:00:06:60:23 in network mk-embed-certs-487027
	I1002 11:59:38.186245  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHPort
	I1002 11:59:38.186539  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHKeyPath
	I1002 11:59:38.186748  384787 main.go:141] libmachine: (embed-certs-487027) Calling .GetSSHUsername
	I1002 11:59:38.186938  384787 sshutil.go:53] new ssh client: &{IP:192.168.72.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/embed-certs-487027/id_rsa Username:docker}
	I1002 11:59:38.337092  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 11:59:38.337129  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 11:59:38.379388  384787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:38.389992  384787 node_ready.go:35] waiting up to 6m0s for node "embed-certs-487027" to be "Ready" ...
	I1002 11:59:38.390060  384787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 11:59:38.399264  384787 node_ready.go:49] node "embed-certs-487027" has status "Ready":"True"
	I1002 11:59:38.399295  384787 node_ready.go:38] duration metric: took 9.264648ms waiting for node "embed-certs-487027" to be "Ready" ...
	I1002 11:59:38.399308  384787 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:38.401885  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 11:59:38.401909  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 11:59:38.406757  384787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:38.438158  384787 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.458749  384787 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:38.458784  384787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 11:59:38.517143  384787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:38.547128  384787 pod_ready.go:92] pod "etcd-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:38.547161  384787 pod_ready.go:81] duration metric: took 108.899374ms waiting for pod "etcd-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.547176  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.744560  384787 pod_ready.go:92] pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:38.744587  384787 pod_ready.go:81] duration metric: took 197.40322ms waiting for pod "kube-apiserver-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.744598  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.852242  384787 pod_ready.go:92] pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:38.852277  384787 pod_ready.go:81] duration metric: took 107.671499ms waiting for pod "kube-controller-manager-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:38.852294  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6g7f7" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.017545  384787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.638113738s)
	I1002 11:59:41.017602  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.017613  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.017597  384787 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.627499125s)
	I1002 11:59:41.017658  384787 start.go:923] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1002 11:59:41.017718  384787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.610925223s)
	I1002 11:59:41.017747  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.017759  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.017907  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.017960  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.017977  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.017994  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.018535  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.018549  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.018559  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.018568  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.018636  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.018645  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.018679  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Closing plugin on server side
	I1002 11:59:41.019046  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Closing plugin on server side
	I1002 11:59:41.019049  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.019064  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.027153  384787 pod_ready.go:102] pod "kube-proxy-6g7f7" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:41.049978  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.050007  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.050369  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.050391  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.100800  384787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.583606696s)
	I1002 11:59:41.100870  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.100900  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.101237  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.101258  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.101268  384787 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:41.101278  384787 main.go:141] libmachine: (embed-certs-487027) Calling .Close
	I1002 11:59:41.101576  384787 main.go:141] libmachine: (embed-certs-487027) DBG | Closing plugin on server side
	I1002 11:59:41.101621  384787 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:41.101634  384787 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:41.101647  384787 addons.go:467] Verifying addon metrics-server=true in "embed-certs-487027"
	I1002 11:59:41.103637  384787 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1002 11:59:37.222165  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:39.223800  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:41.105142  384787 addons.go:502] enable addons completed in 3.007188775s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1002 11:59:41.492039  384787 pod_ready.go:92] pod "kube-proxy-6g7f7" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:41.492067  384787 pod_ready.go:81] duration metric: took 2.639765498s waiting for pod "kube-proxy-6g7f7" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.492081  384787 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.500950  384787 pod_ready.go:92] pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace has status "Ready":"True"
	I1002 11:59:41.500979  384787 pod_ready.go:81] duration metric: took 8.889098ms waiting for pod "kube-scheduler-embed-certs-487027" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:41.500990  384787 pod_ready.go:38] duration metric: took 3.101668727s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:41.501012  384787 api_server.go:52] waiting for apiserver process to appear ...
	I1002 11:59:41.501079  384787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:59:41.533141  384787 api_server.go:72] duration metric: took 3.401757173s to wait for apiserver process to appear ...
	I1002 11:59:41.533167  384787 api_server.go:88] waiting for apiserver healthz status ...
	I1002 11:59:41.533183  384787 api_server.go:253] Checking apiserver healthz at https://192.168.72.147:8443/healthz ...
	I1002 11:59:41.543027  384787 api_server.go:279] https://192.168.72.147:8443/healthz returned 200:
	ok
	I1002 11:59:41.545456  384787 api_server.go:141] control plane version: v1.28.2
	I1002 11:59:41.545483  384787 api_server.go:131] duration metric: took 12.308941ms to wait for apiserver health ...
	I1002 11:59:41.545494  384787 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 11:59:41.556090  384787 system_pods.go:59] 8 kube-system pods found
	I1002 11:59:41.556183  384787 system_pods.go:61] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:41.556209  384787 system_pods.go:61] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:41.556247  384787 system_pods.go:61] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:41.556272  384787 system_pods.go:61] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:41.556290  384787 system_pods.go:61] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:41.556306  384787 system_pods.go:61] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:41.556329  384787 system_pods.go:61] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:41.556366  384787 system_pods.go:61] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:41.556392  384787 system_pods.go:74] duration metric: took 10.889958ms to wait for pod list to return data ...
	I1002 11:59:41.556412  384787 default_sa.go:34] waiting for default service account to be created ...
	I1002 11:59:41.594659  384787 default_sa.go:45] found service account: "default"
	I1002 11:59:41.594690  384787 default_sa.go:55] duration metric: took 38.261546ms for default service account to be created ...
	I1002 11:59:41.594701  384787 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 11:59:41.800342  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:41.800375  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:41.800382  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:41.800388  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:41.800393  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:41.800397  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:41.800401  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:41.800407  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:41.800412  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:41.800431  384787 retry.go:31] will retry after 300.830497ms: missing components: kube-dns
	I1002 11:59:42.116978  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:42.117028  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:42.117039  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:42.117048  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:42.117058  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:42.117064  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:42.117071  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:42.117080  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:42.117089  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:42.117109  384787 retry.go:31] will retry after 380.49084ms: missing components: kube-dns
	I1002 11:59:42.506867  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:42.506901  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:42.506908  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:42.506914  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:42.506919  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:42.506923  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:42.506927  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:42.506933  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:42.506939  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:42.506954  384787 retry.go:31] will retry after 409.062449ms: missing components: kube-dns
	I1002 11:59:42.924401  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:42.924443  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 11:59:42.924456  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:42.924464  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:42.924471  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:42.924477  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:42.924484  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:42.924493  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:42.924503  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 11:59:42.924524  384787 retry.go:31] will retry after 544.758887ms: missing components: kube-dns
	I1002 11:59:43.477592  384787 system_pods.go:86] 8 kube-system pods found
	I1002 11:59:43.477622  384787 system_pods.go:89] "coredns-5dd5756b68-qbmwd" [54a61868-45fc-40cd-8887-0609835639c1] Running
	I1002 11:59:43.477628  384787 system_pods.go:89] "etcd-embed-certs-487027" [a17bdc4f-f236-4438-8030-f59e60684512] Running
	I1002 11:59:43.477632  384787 system_pods.go:89] "kube-apiserver-embed-certs-487027" [4ea3b962-dfa4-49ef-9d4f-bbdf1a4b399f] Running
	I1002 11:59:43.477637  384787 system_pods.go:89] "kube-controller-manager-embed-certs-487027" [4aedb55c-7145-4b7c-9f36-f03c4fedab55] Running
	I1002 11:59:43.477640  384787 system_pods.go:89] "kube-proxy-6g7f7" [37b0eff0-06cb-4b57-b679-970c738d0485] Running
	I1002 11:59:43.477645  384787 system_pods.go:89] "kube-scheduler-embed-certs-487027" [915ead65-b694-445d-9946-375582d4f094] Running
	I1002 11:59:43.477651  384787 system_pods.go:89] "metrics-server-57f55c9bc5-hbb5d" [2bf56144-ca7b-4688-883e-372101260b52] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 11:59:43.477657  384787 system_pods.go:89] "storage-provisioner" [97b21176-98f2-4fb6-98ea-1435def0edd9] Running
	I1002 11:59:43.477665  384787 system_pods.go:126] duration metric: took 1.882959518s to wait for k8s-apps to be running ...
	I1002 11:59:43.477672  384787 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 11:59:43.477714  384787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:43.492105  384787 system_svc.go:56] duration metric: took 14.416995ms WaitForService to wait for kubelet.
	I1002 11:59:43.492138  384787 kubeadm.go:581] duration metric: took 5.360761991s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 11:59:43.492161  384787 node_conditions.go:102] verifying NodePressure condition ...
	I1002 11:59:43.496739  384787 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 11:59:43.496769  384787 node_conditions.go:123] node cpu capacity is 2
	I1002 11:59:43.496785  384787 node_conditions.go:105] duration metric: took 4.61842ms to run NodePressure ...
	I1002 11:59:43.496801  384787 start.go:228] waiting for startup goroutines ...
	I1002 11:59:43.496810  384787 start.go:233] waiting for cluster config update ...
	I1002 11:59:43.496823  384787 start.go:242] writing updated cluster config ...
	I1002 11:59:43.497156  384787 ssh_runner.go:195] Run: rm -f paused
	I1002 11:59:43.568627  384787 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 11:59:43.570324  384787 out.go:177] * Done! kubectl is now configured to use "embed-certs-487027" cluster and "default" namespace by default
	I1002 11:59:39.194035  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:39.810338  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:40.310222  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:40.809912  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:41.310004  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:41.810506  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:42.309581  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:42.810312  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:43.310294  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:43.809602  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:41.722699  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:44.221300  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:44.309927  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:44.810169  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:45.310095  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:45.809546  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:46.310144  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:46.809605  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:47.310487  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:47.809697  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:48.309464  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:48.809680  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:46.723036  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:49.220863  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:51.221417  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:49.310000  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:49.809922  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:50.310214  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:50.809728  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:51.309659  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:51.809723  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:52.309837  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:52.809788  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:53.309655  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:53.809468  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:54.310103  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:54.810421  384505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 11:59:54.968150  384505 kubeadm.go:1081] duration metric: took 16.219921091s to wait for elevateKubeSystemPrivileges.
	I1002 11:59:54.968184  384505 kubeadm.go:406] StartCluster complete in 5m46.426951815s
	I1002 11:59:54.968203  384505 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:54.968302  384505 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:59:54.970101  384505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 11:59:54.970429  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 11:59:54.970599  384505 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 11:59:54.970672  384505 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-749860"
	I1002 11:59:54.970692  384505 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-749860"
	W1002 11:59:54.970703  384505 addons.go:240] addon storage-provisioner should already be in state true
	I1002 11:59:54.970723  384505 config.go:182] Loaded profile config "old-k8s-version-749860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1002 11:59:54.970753  384505 host.go:66] Checking if "old-k8s-version-749860" exists ...
	I1002 11:59:54.970775  384505 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-749860"
	I1002 11:59:54.970792  384505 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-749860"
	I1002 11:59:54.971196  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.971204  384505 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-749860"
	I1002 11:59:54.971226  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.971199  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.971240  384505 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-749860"
	W1002 11:59:54.971251  384505 addons.go:240] addon metrics-server should already be in state true
	I1002 11:59:54.971281  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.971297  384505 host.go:66] Checking if "old-k8s-version-749860" exists ...
	I1002 11:59:54.971669  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.971707  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.989112  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40477
	I1002 11:59:54.989701  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:54.989819  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38163
	I1002 11:59:54.989971  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41679
	I1002 11:59:54.990503  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:54.990552  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:54.990574  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:54.990592  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:54.990975  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:54.991042  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:54.991062  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:54.991094  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:54.991110  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:54.991327  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:54.991555  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:54.991596  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:54.992169  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.992183  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:54.992197  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.992206  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:54.998018  384505 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-749860"
	W1002 11:59:54.998043  384505 addons.go:240] addon default-storageclass should already be in state true
	I1002 11:59:54.998067  384505 host.go:66] Checking if "old-k8s-version-749860" exists ...
	I1002 11:59:54.998716  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:55.003322  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:55.020037  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
	I1002 11:59:55.020659  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.021292  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.021313  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.021707  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.021896  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:55.022155  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34603
	I1002 11:59:55.022286  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35351
	I1002 11:59:55.022697  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.024740  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:59:55.024793  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.024824  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.024839  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.027065  384505 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 11:59:55.025237  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.025561  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.028415  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.028568  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 11:59:55.028579  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 11:59:55.028596  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:59:55.028867  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.029051  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:55.030397  384505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:59:55.030424  384505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:59:55.031461  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:59:55.033181  384505 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 11:59:55.032032  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.032651  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:59:55.034670  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:59:55.034698  384505 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:55.034703  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.034711  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 11:59:55.034727  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:59:55.034894  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:59:55.035089  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:59:55.035269  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:59:55.046534  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.046573  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:59:55.046599  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.046629  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:59:55.046888  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:59:55.047102  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:59:55.047276  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:59:55.051887  384505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45615
	I1002 11:59:55.052372  384505 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:59:55.052940  384505 main.go:141] libmachine: Using API Version  1
	I1002 11:59:55.052970  384505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:59:55.053349  384505 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:59:55.053558  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetState
	I1002 11:59:55.055503  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .DriverName
	I1002 11:59:55.055762  384505 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:55.055780  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 11:59:55.055805  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHHostname
	I1002 11:59:55.062494  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.062526  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:c3:b0", ip: ""} in network mk-old-k8s-version-749860: {Iface:virbr5 ExpiryTime:2023-10-02 12:53:51 +0000 UTC Type:0 Mac:52:54:00:d4:c3:b0 Iaid: IPaddr:192.168.83.82 Prefix:24 Hostname:old-k8s-version-749860 Clientid:01:52:54:00:d4:c3:b0}
	I1002 11:59:55.062542  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | domain old-k8s-version-749860 has defined IP address 192.168.83.82 and MAC address 52:54:00:d4:c3:b0 in network mk-old-k8s-version-749860
	I1002 11:59:55.062550  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHPort
	I1002 11:59:55.062752  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHKeyPath
	I1002 11:59:55.062922  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .GetSSHUsername
	I1002 11:59:55.063162  384505 sshutil.go:53] new ssh client: &{IP:192.168.83.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/old-k8s-version-749860/id_rsa Username:docker}
	I1002 11:59:55.103907  384505 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-749860" context rescaled to 1 replicas
	I1002 11:59:55.103958  384505 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.82 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 11:59:55.105626  384505 out.go:177] * Verifying Kubernetes components...
	I1002 11:59:53.722331  384344 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:54.914848  384344 pod_ready.go:81] duration metric: took 4m0.000973055s waiting for pod "metrics-server-57f55c9bc5-lrqt9" in "kube-system" namespace to be "Ready" ...
	E1002 11:59:54.914899  384344 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 11:59:54.914926  384344 pod_ready.go:38] duration metric: took 4m12.745047876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:54.914963  384344 kubeadm.go:640] restartCluster took 4m32.83554771s
	W1002 11:59:54.915062  384344 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1002 11:59:54.915098  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 11:59:55.106948  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:59:55.283274  384505 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-749860" to be "Ready" ...
	I1002 11:59:55.283336  384505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 11:59:55.291603  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 11:59:55.291629  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 11:59:55.297775  384505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 11:59:55.321901  384505 node_ready.go:49] node "old-k8s-version-749860" has status "Ready":"True"
	I1002 11:59:55.321927  384505 node_ready.go:38] duration metric: took 38.615436ms waiting for node "old-k8s-version-749860" to be "Ready" ...
	I1002 11:59:55.321939  384505 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 11:59:55.327570  384505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 11:59:55.355612  384505 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace to be "Ready" ...
	I1002 11:59:55.357164  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 11:59:55.357187  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 11:59:55.423852  384505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:55.423883  384505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 11:59:55.477683  384505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 11:59:56.041846  384505 start.go:923] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I1002 11:59:56.230394  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230432  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.230466  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230488  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.230810  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.230869  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.230888  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.230913  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.230936  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230890  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.230969  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.230990  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.231024  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.231326  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.231341  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.231652  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.231667  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.231740  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.327260  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.327289  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.327633  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.327654  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.547462  384505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.069727635s)
	I1002 11:59:56.547536  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.547549  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.547901  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.547948  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.547974  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.547993  384505 main.go:141] libmachine: Making call to close driver server
	I1002 11:59:56.548010  384505 main.go:141] libmachine: (old-k8s-version-749860) Calling .Close
	I1002 11:59:56.548288  384505 main.go:141] libmachine: Successfully made call to close driver server
	I1002 11:59:56.548321  384505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 11:59:56.548322  384505 main.go:141] libmachine: (old-k8s-version-749860) DBG | Closing plugin on server side
	I1002 11:59:56.548333  384505 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-749860"
	I1002 11:59:56.550084  384505 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1002 11:59:56.551798  384505 addons.go:502] enable addons completed in 1.581195105s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1002 11:59:57.554993  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 11:59:59.933613  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:01.937565  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:04.431925  384505 pod_ready.go:102] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:05.433988  384505 pod_ready.go:92] pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:05.434013  384505 pod_ready.go:81] duration metric: took 10.078369703s waiting for pod "coredns-5644d7b6d9-7b9bb" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:05.434029  384505 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mdtp5" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:05.441501  384505 pod_ready.go:92] pod "kube-proxy-mdtp5" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:05.441534  384505 pod_ready.go:81] duration metric: took 7.496823ms waiting for pod "kube-proxy-mdtp5" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:05.441543  384505 pod_ready.go:38] duration metric: took 10.1195912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:00:05.441592  384505 api_server.go:52] waiting for apiserver process to appear ...
	I1002 12:00:05.441680  384505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 12:00:05.460054  384505 api_server.go:72] duration metric: took 10.356049869s to wait for apiserver process to appear ...
	I1002 12:00:05.460080  384505 api_server.go:88] waiting for apiserver healthz status ...
	I1002 12:00:05.460100  384505 api_server.go:253] Checking apiserver healthz at https://192.168.83.82:8443/healthz ...
	I1002 12:00:05.466796  384505 api_server.go:279] https://192.168.83.82:8443/healthz returned 200:
	ok
	I1002 12:00:05.467813  384505 api_server.go:141] control plane version: v1.16.0
	I1002 12:00:05.467845  384505 api_server.go:131] duration metric: took 7.75678ms to wait for apiserver health ...
	I1002 12:00:05.467855  384505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 12:00:05.472349  384505 system_pods.go:59] 4 kube-system pods found
	I1002 12:00:05.472384  384505 system_pods.go:61] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:05.472391  384505 system_pods.go:61] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:05.472401  384505 system_pods.go:61] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:05.472410  384505 system_pods.go:61] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:05.472433  384505 system_pods.go:74] duration metric: took 4.569442ms to wait for pod list to return data ...
	I1002 12:00:05.472446  384505 default_sa.go:34] waiting for default service account to be created ...
	I1002 12:00:05.476327  384505 default_sa.go:45] found service account: "default"
	I1002 12:00:05.476349  384505 default_sa.go:55] duration metric: took 3.895344ms for default service account to be created ...
	I1002 12:00:05.476357  384505 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 12:00:05.480522  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:05.480545  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:05.480550  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:05.480557  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:05.480563  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:05.480579  384505 retry.go:31] will retry after 270.891275ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:05.757515  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:05.757555  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:05.757563  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:05.757574  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:05.757585  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:05.757603  384505 retry.go:31] will retry after 336.725562ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:06.099945  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:06.099978  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:06.099985  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:06.099995  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:06.100002  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:06.100024  384505 retry.go:31] will retry after 389.53153ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:06.504317  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:06.504354  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:06.504362  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:06.504375  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:06.504385  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:06.504407  384505 retry.go:31] will retry after 453.465732ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:06.962509  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:06.962534  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:06.962539  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:06.962546  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:06.962552  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:06.962568  384505 retry.go:31] will retry after 489.820063ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:07.457422  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:07.457451  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:07.457456  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:07.457465  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:07.457472  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:07.457490  384505 retry.go:31] will retry after 931.079053ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:08.394500  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:08.394527  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:08.394532  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:08.394538  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:08.394546  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:08.394562  384505 retry.go:31] will retry after 929.512162ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:09.216426  384344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.301296702s)
	I1002 12:00:09.216493  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:00:09.230712  384344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 12:00:09.239588  384344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 12:00:09.248624  384344 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 12:00:09.248677  384344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 12:00:09.466935  384344 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 12:00:09.329677  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:09.329709  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:09.329714  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:09.329722  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:09.329728  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:09.329746  384505 retry.go:31] will retry after 898.08397ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:10.232119  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:10.232155  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:10.232163  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:10.232176  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:10.232185  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:10.232212  384505 retry.go:31] will retry after 1.809149678s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:12.047424  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:12.047452  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:12.047458  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:12.047465  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:12.047471  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:12.047487  384505 retry.go:31] will retry after 2.054960799s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:14.109048  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:14.109080  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:14.109088  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:14.109098  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:14.109108  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:14.109128  384505 retry.go:31] will retry after 2.523219254s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:16.640373  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:16.640399  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:16.640405  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:16.640412  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:16.640419  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:16.640436  384505 retry.go:31] will retry after 2.61022195s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:19.606412  384344 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 12:00:19.606505  384344 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 12:00:19.606620  384344 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 12:00:19.606760  384344 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 12:00:19.606856  384344 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 12:00:19.606912  384344 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 12:00:19.608541  384344 out.go:204]   - Generating certificates and keys ...
	I1002 12:00:19.608638  384344 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 12:00:19.608743  384344 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 12:00:19.608891  384344 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 12:00:19.608999  384344 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1002 12:00:19.609113  384344 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 12:00:19.609193  384344 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1002 12:00:19.609276  384344 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1002 12:00:19.609360  384344 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1002 12:00:19.609453  384344 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 12:00:19.609548  384344 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 12:00:19.609624  384344 kubeadm.go:322] [certs] Using the existing "sa" key
	I1002 12:00:19.609694  384344 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 12:00:19.609761  384344 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 12:00:19.609833  384344 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 12:00:19.609916  384344 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 12:00:19.609991  384344 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 12:00:19.610100  384344 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 12:00:19.610182  384344 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 12:00:19.611696  384344 out.go:204]   - Booting up control plane ...
	I1002 12:00:19.611810  384344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 12:00:19.611916  384344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 12:00:19.612021  384344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 12:00:19.612173  384344 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 12:00:19.612294  384344 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 12:00:19.612346  384344 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 12:00:19.612576  384344 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 12:00:19.612683  384344 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502476 seconds
	I1002 12:00:19.612825  384344 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 12:00:19.612943  384344 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 12:00:19.613026  384344 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 12:00:19.613215  384344 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-304121 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 12:00:19.613266  384344 kubeadm.go:322] [bootstrap-token] Using token: pd40pp.2tkeaw4x1d1qfkq9
	I1002 12:00:19.614472  384344 out.go:204]   - Configuring RBAC rules ...
	I1002 12:00:19.614593  384344 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 12:00:19.614706  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 12:00:19.614912  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 12:00:19.615054  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 12:00:19.615220  384344 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 12:00:19.615315  384344 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 12:00:19.615474  384344 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 12:00:19.615540  384344 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 12:00:19.615622  384344 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 12:00:19.615633  384344 kubeadm.go:322] 
	I1002 12:00:19.615725  384344 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 12:00:19.615747  384344 kubeadm.go:322] 
	I1002 12:00:19.615851  384344 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 12:00:19.615864  384344 kubeadm.go:322] 
	I1002 12:00:19.615894  384344 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 12:00:19.615997  384344 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 12:00:19.616084  384344 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 12:00:19.616094  384344 kubeadm.go:322] 
	I1002 12:00:19.616143  384344 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 12:00:19.616152  384344 kubeadm.go:322] 
	I1002 12:00:19.616222  384344 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 12:00:19.616240  384344 kubeadm.go:322] 
	I1002 12:00:19.616321  384344 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 12:00:19.616420  384344 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 12:00:19.616532  384344 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 12:00:19.616548  384344 kubeadm.go:322] 
	I1002 12:00:19.616640  384344 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 12:00:19.616734  384344 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 12:00:19.616743  384344 kubeadm.go:322] 
	I1002 12:00:19.616857  384344 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token pd40pp.2tkeaw4x1d1qfkq9 \
	I1002 12:00:19.617005  384344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca \
	I1002 12:00:19.617049  384344 kubeadm.go:322] 	--control-plane 
	I1002 12:00:19.617059  384344 kubeadm.go:322] 
	I1002 12:00:19.617136  384344 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 12:00:19.617142  384344 kubeadm.go:322] 
	I1002 12:00:19.617238  384344 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token pd40pp.2tkeaw4x1d1qfkq9 \
	I1002 12:00:19.617333  384344 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:df1f40e72a6c06e816811c6310d0fc881cd171ff228c69c13ce97c55f960aeca 
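The `--discovery-token-ca-cert-hash` value printed by `kubeadm init` above is a SHA-256 digest of the cluster CA's DER-encoded public key, which joining nodes use to pin the control plane's identity. As a hedged, self-contained sketch (using a throwaway certificate at the hypothetical path `/tmp/ca.crt` in place of the real `/etc/kubernetes/pki/ca.crt`), the hash can be recomputed with the pipeline documented for kubeadm:

```shell
# Generate a throwaway CA cert standing in for /etc/kubernetes/pki/ca.crt
# (illustrative only; on a real node you would read the existing CA cert).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/ca.key -out /tmp/ca.crt -days 1 -subj "/CN=test-ca" 2>/dev/null

# kubeadm's discovery hash = sha256 over the DER-encoded SubjectPublicKeyInfo:
openssl x509 -pubkey -noout -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```

The resulting 64-character hex string is what appears after `sha256:` in the `kubeadm join` commands logged above.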
	I1002 12:00:19.617371  384344 cni.go:84] Creating CNI manager for ""
	I1002 12:00:19.617384  384344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 12:00:19.618962  384344 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 12:00:19.620215  384344 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 12:00:19.650698  384344 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 12:00:19.699458  384344 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 12:00:19.699594  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=no-preload-304121 minikube.k8s.io/updated_at=2023_10_02T12_00_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:19.699598  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:19.810984  384344 ops.go:34] apiserver oom_adj: -16
	I1002 12:00:20.114460  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:20.245669  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:20.876563  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:19.256294  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:19.256319  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:19.256325  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:19.256332  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:19.256338  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:19.256355  384505 retry.go:31] will retry after 3.270215577s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:22.532684  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:22.532714  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:22.532723  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:22.532730  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:22.532737  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:22.532754  384505 retry.go:31] will retry after 5.273561216s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:21.376620  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:21.876453  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:22.376537  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:22.876967  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:23.377242  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:23.876469  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:24.376391  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:24.877422  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:25.376422  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:25.877251  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:27.810777  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:27.810810  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:27.810816  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:27.810822  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:27.810828  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:27.810845  384505 retry.go:31] will retry after 6.34425242s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:26.376388  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:26.877267  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:27.376480  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:27.877214  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:28.376560  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:28.876964  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:29.377314  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:29.877135  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:30.377301  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:30.876525  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:31.376660  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:31.876991  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:32.376934  384344 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 12:00:32.584774  384344 kubeadm.go:1081] duration metric: took 12.88524826s to wait for elevateKubeSystemPrivileges.
	I1002 12:00:32.584821  384344 kubeadm.go:406] StartCluster complete in 5m10.55691254s
	I1002 12:00:32.584849  384344 settings.go:142] acquiring lock: {Name:mk76c4023d5e9dc9b7da31a8dc5e0744473ad8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:00:32.584955  384344 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 12:00:32.587722  384344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/kubeconfig: {Name:mk86aae4de5481537c68efc6a006641ee62c4137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 12:00:32.588018  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 12:00:32.588146  384344 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 12:00:32.588230  384344 addons.go:69] Setting default-storageclass=true in profile "no-preload-304121"
	I1002 12:00:32.588251  384344 addons.go:69] Setting metrics-server=true in profile "no-preload-304121"
	I1002 12:00:32.588265  384344 addons.go:231] Setting addon metrics-server=true in "no-preload-304121"
	W1002 12:00:32.588273  384344 addons.go:240] addon metrics-server should already be in state true
	I1002 12:00:32.588252  384344 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-304121"
	I1002 12:00:32.588323  384344 config.go:182] Loaded profile config "no-preload-304121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 12:00:32.588333  384344 host.go:66] Checking if "no-preload-304121" exists ...
	I1002 12:00:32.588229  384344 addons.go:69] Setting storage-provisioner=true in profile "no-preload-304121"
	I1002 12:00:32.588387  384344 addons.go:231] Setting addon storage-provisioner=true in "no-preload-304121"
	W1002 12:00:32.588397  384344 addons.go:240] addon storage-provisioner should already be in state true
	I1002 12:00:32.588433  384344 host.go:66] Checking if "no-preload-304121" exists ...
	I1002 12:00:32.588695  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.588731  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.588737  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.588777  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.588867  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.588891  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.612093  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40075
	I1002 12:00:32.612118  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33697
	I1002 12:00:32.612252  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I1002 12:00:32.612652  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.612799  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.612847  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.613307  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.613337  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.613432  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.613504  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.613715  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.613718  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.613838  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.613955  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.614146  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.614197  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.614802  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.614842  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.615497  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.615534  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.617844  384344 addons.go:231] Setting addon default-storageclass=true in "no-preload-304121"
	W1002 12:00:32.617884  384344 addons.go:240] addon default-storageclass should already be in state true
	I1002 12:00:32.617914  384344 host.go:66] Checking if "no-preload-304121" exists ...
	I1002 12:00:32.618326  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.618436  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.634123  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36037
	I1002 12:00:32.634849  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.634953  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
	I1002 12:00:32.635328  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.635470  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.635495  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.635819  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.635841  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.635867  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.636193  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.636340  384344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 12:00:32.636373  384344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 12:00:32.636435  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.637717  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39059
	I1002 12:00:32.638051  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 12:00:32.640160  384344 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 12:00:32.642288  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 12:00:32.642300  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 12:00:32.642314  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 12:00:32.640240  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.642837  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.642863  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.643527  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.643695  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.645514  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.645565  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 12:00:32.648157  384344 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 12:00:32.645977  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 12:00:32.646152  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 12:00:32.650297  384344 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 12:00:32.650313  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 12:00:32.650328  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 12:00:32.650380  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.650547  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 12:00:32.650823  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 12:00:32.650961  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 12:00:32.653953  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.654560  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 12:00:32.654592  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.654886  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 12:00:32.655049  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 12:00:32.655195  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 12:00:32.655410  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 12:00:32.658005  384344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45823
	I1002 12:00:32.658525  384344 main.go:141] libmachine: () Calling .GetVersion
	I1002 12:00:32.659046  384344 main.go:141] libmachine: Using API Version  1
	I1002 12:00:32.659059  384344 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 12:00:32.659478  384344 main.go:141] libmachine: () Calling .GetMachineName
	I1002 12:00:32.659611  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetState
	I1002 12:00:32.661708  384344 main.go:141] libmachine: (no-preload-304121) Calling .DriverName
	I1002 12:00:32.661982  384344 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 12:00:32.661998  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 12:00:32.662018  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHHostname
	I1002 12:00:32.664637  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.665005  384344 main.go:141] libmachine: (no-preload-304121) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:b9:ea", ip: ""} in network mk-no-preload-304121: {Iface:virbr1 ExpiryTime:2023-10-02 12:54:53 +0000 UTC Type:0 Mac:52:54:00:11:b9:ea Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:no-preload-304121 Clientid:01:52:54:00:11:b9:ea}
	I1002 12:00:32.665023  384344 main.go:141] libmachine: (no-preload-304121) DBG | domain no-preload-304121 has defined IP address 192.168.39.143 and MAC address 52:54:00:11:b9:ea in network mk-no-preload-304121
	I1002 12:00:32.665161  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHPort
	I1002 12:00:32.665335  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHKeyPath
	I1002 12:00:32.665426  384344 main.go:141] libmachine: (no-preload-304121) Calling .GetSSHUsername
	I1002 12:00:32.665610  384344 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/no-preload-304121/id_rsa Username:docker}
	I1002 12:00:32.723429  384344 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-304121" context rescaled to 1 replicas
	I1002 12:00:32.723469  384344 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 12:00:32.725329  384344 out.go:177] * Verifying Kubernetes components...
	I1002 12:00:32.726924  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:00:32.860425  384344 node_ready.go:35] waiting up to 6m0s for node "no-preload-304121" to be "Ready" ...
	I1002 12:00:32.860515  384344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 12:00:32.904658  384344 node_ready.go:49] node "no-preload-304121" has status "Ready":"True"
	I1002 12:00:32.904689  384344 node_ready.go:38] duration metric: took 44.230643ms waiting for node "no-preload-304121" to be "Ready" ...
	I1002 12:00:32.904705  384344 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:00:32.949887  384344 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:32.984050  384344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 12:00:32.997841  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 12:00:32.997869  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 12:00:32.999235  384344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 12:00:33.082015  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 12:00:33.082051  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 12:00:33.326524  384344 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 12:00:33.326554  384344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 12:00:33.403533  384344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 12:00:34.844716  384344 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.984135314s)
	I1002 12:00:34.844752  384344 start.go:923] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
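The `sed` pipeline logged at 12:00:32 edits the `coredns` ConfigMap so pods can resolve `host.minikube.internal` to the host-side gateway. Based on the insertion pattern visible in that command, the resulting Corefile fragment would use CoreDNS's `hosts` plugin roughly like this (a sketch, not the captured ConfigMap contents):

```
hosts {
   192.168.39.1 host.minikube.internal
   fallthrough
}
```

`fallthrough` hands any non-matching query on to the next plugin (here, the `forward . /etc/resolv.conf` line the sed expression anchors on), so only the injected host record is served locally.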
	I1002 12:00:35.114639  384344 pod_ready.go:102] pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace has status "Ready":"False"
	I1002 12:00:35.538571  384344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.55447937s)
	I1002 12:00:35.538624  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.538641  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.538652  384344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.539381648s)
	I1002 12:00:35.538700  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.538713  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.539005  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539027  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.539039  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.539049  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.539137  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.539162  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539176  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.539194  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.539203  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.539299  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.539328  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539341  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.539537  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.539588  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.539622  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.596015  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.596048  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.596384  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.596431  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.596449  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.641915  384344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.238327482s)
	I1002 12:00:35.641985  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.642007  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.642363  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.642389  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.642399  384344 main.go:141] libmachine: Making call to close driver server
	I1002 12:00:35.642409  384344 main.go:141] libmachine: (no-preload-304121) Calling .Close
	I1002 12:00:35.642423  384344 main.go:141] libmachine: (no-preload-304121) DBG | Closing plugin on server side
	I1002 12:00:35.642716  384344 main.go:141] libmachine: Successfully made call to close driver server
	I1002 12:00:35.642739  384344 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 12:00:35.642750  384344 addons.go:467] Verifying addon metrics-server=true in "no-preload-304121"
	I1002 12:00:35.644696  384344 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1002 12:00:35.646046  384344 addons.go:502] enable addons completed in 3.05790546s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1002 12:00:36.113386  384344 pod_ready.go:92] pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.113415  384344 pod_ready.go:81] duration metric: took 3.163496821s waiting for pod "coredns-5dd5756b68-st2bd" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.113429  384344 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.116264  384344 pod_ready.go:97] error getting pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-zcnv5" not found
	I1002 12:00:36.116290  384344 pod_ready.go:81] duration metric: took 2.85415ms waiting for pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace to be "Ready" ...
	E1002 12:00:36.116300  384344 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-zcnv5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-zcnv5" not found
	I1002 12:00:36.116306  384344 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.126555  384344 pod_ready.go:92] pod "etcd-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.126575  384344 pod_ready.go:81] duration metric: took 10.262082ms waiting for pod "etcd-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.126583  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.137876  384344 pod_ready.go:92] pod "kube-apiserver-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.137903  384344 pod_ready.go:81] duration metric: took 11.312511ms waiting for pod "kube-apiserver-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.137916  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.146526  384344 pod_ready.go:92] pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.146549  384344 pod_ready.go:81] duration metric: took 8.624341ms waiting for pod "kube-controller-manager-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.146561  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sprhm" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.307205  384344 pod_ready.go:92] pod "kube-proxy-sprhm" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.307231  384344 pod_ready.go:81] duration metric: took 160.663088ms waiting for pod "kube-proxy-sprhm" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.307241  384344 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.707429  384344 pod_ready.go:92] pod "kube-scheduler-no-preload-304121" in "kube-system" namespace has status "Ready":"True"
	I1002 12:00:36.707455  384344 pod_ready.go:81] duration metric: took 400.207608ms waiting for pod "kube-scheduler-no-preload-304121" in "kube-system" namespace to be "Ready" ...
	I1002 12:00:36.707463  384344 pod_ready.go:38] duration metric: took 3.802745796s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 12:00:36.707480  384344 api_server.go:52] waiting for apiserver process to appear ...
	I1002 12:00:36.707537  384344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 12:00:36.733934  384344 api_server.go:72] duration metric: took 4.010431274s to wait for apiserver process to appear ...
	I1002 12:00:36.733962  384344 api_server.go:88] waiting for apiserver healthz status ...
	I1002 12:00:36.733979  384344 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1002 12:00:36.740562  384344 api_server.go:279] https://192.168.39.143:8443/healthz returned 200:
	ok
	I1002 12:00:36.742234  384344 api_server.go:141] control plane version: v1.28.2
	I1002 12:00:36.742259  384344 api_server.go:131] duration metric: took 8.289515ms to wait for apiserver health ...
	I1002 12:00:36.742270  384344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 12:00:36.910934  384344 system_pods.go:59] 8 kube-system pods found
	I1002 12:00:36.910962  384344 system_pods.go:61] "coredns-5dd5756b68-st2bd" [6623fa3f-9a60-4364-bf08-7e84ae35d4b6] Running
	I1002 12:00:36.910967  384344 system_pods.go:61] "etcd-no-preload-304121" [f0a08dd5-ccdd-44a8-8d0a-ba5f617db7e0] Running
	I1002 12:00:36.910971  384344 system_pods.go:61] "kube-apiserver-no-preload-304121" [2e0d2991-fec5-44b4-8bb2-70206956c983] Running
	I1002 12:00:36.910976  384344 system_pods.go:61] "kube-controller-manager-no-preload-304121" [51031981-2958-4947-8d10-59a15a77ec1b] Running
	I1002 12:00:36.910980  384344 system_pods.go:61] "kube-proxy-sprhm" [d032413b-07c5-4478-bbdf-93383f85f73d] Running
	I1002 12:00:36.910983  384344 system_pods.go:61] "kube-scheduler-no-preload-304121" [f825ba3f-3bca-40ed-a5db-d3a3fc8b0751] Running
	I1002 12:00:36.910991  384344 system_pods.go:61] "metrics-server-57f55c9bc5-6c2hc" [020790e8-555b-4455-8e82-6ea49bb4212a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:36.911002  384344 system_pods.go:61] "storage-provisioner" [9c5b5a2d-e464-477e-9b5c-bf830ee9c640] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 12:00:36.911013  384344 system_pods.go:74] duration metric: took 168.734676ms to wait for pod list to return data ...
	I1002 12:00:36.911027  384344 default_sa.go:34] waiting for default service account to be created ...
	I1002 12:00:37.106994  384344 default_sa.go:45] found service account: "default"
	I1002 12:00:37.107038  384344 default_sa.go:55] duration metric: took 196.001935ms for default service account to be created ...
	I1002 12:00:37.107050  384344 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 12:00:37.310973  384344 system_pods.go:86] 8 kube-system pods found
	I1002 12:00:37.311012  384344 system_pods.go:89] "coredns-5dd5756b68-st2bd" [6623fa3f-9a60-4364-bf08-7e84ae35d4b6] Running
	I1002 12:00:37.311021  384344 system_pods.go:89] "etcd-no-preload-304121" [f0a08dd5-ccdd-44a8-8d0a-ba5f617db7e0] Running
	I1002 12:00:37.311028  384344 system_pods.go:89] "kube-apiserver-no-preload-304121" [2e0d2991-fec5-44b4-8bb2-70206956c983] Running
	I1002 12:00:37.311034  384344 system_pods.go:89] "kube-controller-manager-no-preload-304121" [51031981-2958-4947-8d10-59a15a77ec1b] Running
	I1002 12:00:37.311041  384344 system_pods.go:89] "kube-proxy-sprhm" [d032413b-07c5-4478-bbdf-93383f85f73d] Running
	I1002 12:00:37.311049  384344 system_pods.go:89] "kube-scheduler-no-preload-304121" [f825ba3f-3bca-40ed-a5db-d3a3fc8b0751] Running
	I1002 12:00:37.311060  384344 system_pods.go:89] "metrics-server-57f55c9bc5-6c2hc" [020790e8-555b-4455-8e82-6ea49bb4212a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:37.311075  384344 system_pods.go:89] "storage-provisioner" [9c5b5a2d-e464-477e-9b5c-bf830ee9c640] Running
	I1002 12:00:37.311093  384344 system_pods.go:126] duration metric: took 204.035391ms to wait for k8s-apps to be running ...
	I1002 12:00:37.311103  384344 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 12:00:37.311158  384344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:00:37.327711  384344 system_svc.go:56] duration metric: took 16.597865ms WaitForService to wait for kubelet.
	I1002 12:00:37.327736  384344 kubeadm.go:581] duration metric: took 4.604243467s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 12:00:37.327758  384344 node_conditions.go:102] verifying NodePressure condition ...
	I1002 12:00:37.506633  384344 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 12:00:37.506693  384344 node_conditions.go:123] node cpu capacity is 2
	I1002 12:00:37.506708  384344 node_conditions.go:105] duration metric: took 178.94359ms to run NodePressure ...
	I1002 12:00:37.506722  384344 start.go:228] waiting for startup goroutines ...
	I1002 12:00:37.506728  384344 start.go:233] waiting for cluster config update ...
	I1002 12:00:37.506738  384344 start.go:242] writing updated cluster config ...
	I1002 12:00:37.506999  384344 ssh_runner.go:195] Run: rm -f paused
	I1002 12:00:37.558171  384344 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 12:00:37.560280  384344 out.go:177] * Done! kubectl is now configured to use "no-preload-304121" cluster and "default" namespace by default
	I1002 12:00:34.160478  384505 system_pods.go:86] 4 kube-system pods found
	I1002 12:00:34.160520  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:34.160528  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:34.160540  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:34.160553  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:34.160577  384505 retry.go:31] will retry after 8.056057378s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:42.223209  384505 system_pods.go:86] 5 kube-system pods found
	I1002 12:00:42.223242  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:42.223251  384505 system_pods.go:89] "etcd-old-k8s-version-749860" [893d0bc0-cefb-48d6-82d8-6d6804184a67] Pending
	I1002 12:00:42.223257  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:42.223267  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:42.223276  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:42.223299  384505 retry.go:31] will retry after 9.279474557s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:00:51.510907  384505 system_pods.go:86] 6 kube-system pods found
	I1002 12:00:51.510937  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:00:51.510945  384505 system_pods.go:89] "etcd-old-k8s-version-749860" [893d0bc0-cefb-48d6-82d8-6d6804184a67] Running
	I1002 12:00:51.510949  384505 system_pods.go:89] "kube-apiserver-old-k8s-version-749860" [41854b6e-d738-4af3-9734-8133b2a299df] Pending
	I1002 12:00:51.510953  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:00:51.510959  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:00:51.510965  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:00:51.510995  384505 retry.go:31] will retry after 9.19295244s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 12:01:00.712167  384505 system_pods.go:86] 8 kube-system pods found
	I1002 12:01:00.712195  384505 system_pods.go:89] "coredns-5644d7b6d9-7b9bb" [b437c1ff-d7c0-4708-b799-e4ca54bd00cc] Running
	I1002 12:01:00.712201  384505 system_pods.go:89] "etcd-old-k8s-version-749860" [893d0bc0-cefb-48d6-82d8-6d6804184a67] Running
	I1002 12:01:00.712205  384505 system_pods.go:89] "kube-apiserver-old-k8s-version-749860" [41854b6e-d738-4af3-9734-8133b2a299df] Running
	I1002 12:01:00.712209  384505 system_pods.go:89] "kube-controller-manager-old-k8s-version-749860" [1531e118-f1f1-485e-b258-32e21b3385d8] Running
	I1002 12:01:00.712213  384505 system_pods.go:89] "kube-proxy-mdtp5" [e7e09a24-84ff-4480-b1e4-39273ef37086] Running
	I1002 12:01:00.712217  384505 system_pods.go:89] "kube-scheduler-old-k8s-version-749860" [66983e5c-64ab-48ec-9c24-824f0a7cb36e] Running
	I1002 12:01:00.712223  384505 system_pods.go:89] "metrics-server-74d5856cc6-n7z95" [8ced0464-64fb-40b5-bd97-0c7b8b9bebc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 12:01:00.712230  384505 system_pods.go:89] "storage-provisioner" [893fe40b-d1d7-4569-8d99-85038005f53a] Running
	I1002 12:01:00.712237  384505 system_pods.go:126] duration metric: took 55.235875161s to wait for k8s-apps to be running ...
	I1002 12:01:00.712244  384505 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 12:01:00.712293  384505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 12:01:00.728970  384505 system_svc.go:56] duration metric: took 16.712185ms WaitForService to wait for kubelet.
	I1002 12:01:00.728999  384505 kubeadm.go:581] duration metric: took 1m5.625005524s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 12:01:00.729026  384505 node_conditions.go:102] verifying NodePressure condition ...
	I1002 12:01:00.733153  384505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 12:01:00.733180  384505 node_conditions.go:123] node cpu capacity is 2
	I1002 12:01:00.733196  384505 node_conditions.go:105] duration metric: took 4.162147ms to run NodePressure ...
	I1002 12:01:00.733209  384505 start.go:228] waiting for startup goroutines ...
	I1002 12:01:00.733216  384505 start.go:233] waiting for cluster config update ...
	I1002 12:01:00.733230  384505 start.go:242] writing updated cluster config ...
	I1002 12:01:00.733553  384505 ssh_runner.go:195] Run: rm -f paused
	I1002 12:01:00.784237  384505 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I1002 12:01:00.786178  384505 out.go:177] 
	W1002 12:01:00.787686  384505 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I1002 12:01:00.789104  384505 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1002 12:01:00.790521  384505 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-749860" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-10-02 11:53:50 UTC, ends at Mon 2023-10-02 12:14:04 UTC. --
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.470802667Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248844470785671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=c4a0f736-07e4-4c49-be3f-b4f2fc945664 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.471530850Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7ee218e2-cc5f-45fb-aa9c-e48055e45eb7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.471655238Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7ee218e2-cc5f-45fb-aa9c-e48055e45eb7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.471846050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:039738890c0c0c918dfb2589cc51cc129d9f7a885474027049d11260e016669d,PodSandboxId:c387574f801288e1a08cb1a6f4badabfbe4bc9cfe76e3ce1b94db5014c72045d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247997721919038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 893fe40b-d1d7-4569-8d99-85038005f53a,},Annotations:map[string]string{io.kubernetes.container.hash: f447fab5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bbbaa70397b802c00dcdcc10194a8a68b8409c0fbb5d0a3272f4eab93a5803a,PodSandboxId:63ed4ec3fc8a3e595cf3ab2f1bf8f972e2319f48bfa70fc828da0dc1514a59eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1696247997432387507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7b9bb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b437c1ff-d7c0-4708-b799-e4ca54bd00cc,},Annotations:map[string]string{io.kubernetes.container.hash: c65aea65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92b70651bc8fdda674d4dfa238bc0d51561a6a7d56207488bdb638acda4bc855,PodSandboxId:9cb28fe0eb66b72b0e737eb51d13adcbd75d9ed5c1bb6649e6655c1d0b6236b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1696247996878236072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdtp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e09
a24-84ff-4480-b1e4-39273ef37086,},Annotations:map[string]string{io.kubernetes.container.hash: cb8c26f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9171c6defa67e303dc9b30efbd23f4378fe4dd579bbcae014d7f44068b4eaab5,PodSandboxId:85f6c5d981253e9ec12d99b412af9f4331190426377c28862dda620fc39fae01,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1696247969450661547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1503f30103b4107023c1689b533624,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 4ae11ec6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d21512e1f3511763fa7ac50dfb88800acac5e24ee79277815e016946c18474,PodSandboxId:82a27613594c0acfa17eef576d4b90686ee3172a10b88c7177b04152807fc7c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1696247968045343775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394cafbfead7293254874e819f893bca2ea8aaaff9ae6292b622e35df660404,PodSandboxId:65e57f7a352a9f1c034ca87d29fedab7c6c00c2e7abd2dae7499329df732e12d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1696247967865800453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2034f979f948c1a718b43753d97a5ead,},Annotations:map[string]string{io.kubern
etes.container.hash: 33bf34cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4799bd0c57b132de5e00957eeb4c1380b4319d3c7cbd8a43e78aba49f7be28f8,PodSandboxId:723ffac35945d053d077481404d120373073043d8935ecef90f350f4a994a889,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1696247967587403016,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7ee218e2-cc5f-45fb-aa9c-e48055e45eb7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.514962550Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4ce64217-f0fb-48d6-a665-2a263bc00550 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.515050650Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4ce64217-f0fb-48d6-a665-2a263bc00550 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.516118582Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5c49ee19-2cbb-413f-9231-0f120f992d4d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.516502777Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248844516490892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=5c49ee19-2cbb-413f-9231-0f120f992d4d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.517294396Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=88655433-e660-4799-89ff-ccd4001cab7e name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.517367743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=88655433-e660-4799-89ff-ccd4001cab7e name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.517567035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:039738890c0c0c918dfb2589cc51cc129d9f7a885474027049d11260e016669d,PodSandboxId:c387574f801288e1a08cb1a6f4badabfbe4bc9cfe76e3ce1b94db5014c72045d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247997721919038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 893fe40b-d1d7-4569-8d99-85038005f53a,},Annotations:map[string]string{io.kubernetes.container.hash: f447fab5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bbbaa70397b802c00dcdcc10194a8a68b8409c0fbb5d0a3272f4eab93a5803a,PodSandboxId:63ed4ec3fc8a3e595cf3ab2f1bf8f972e2319f48bfa70fc828da0dc1514a59eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1696247997432387507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7b9bb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b437c1ff-d7c0-4708-b799-e4ca54bd00cc,},Annotations:map[string]string{io.kubernetes.container.hash: c65aea65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92b70651bc8fdda674d4dfa238bc0d51561a6a7d56207488bdb638acda4bc855,PodSandboxId:9cb28fe0eb66b72b0e737eb51d13adcbd75d9ed5c1bb6649e6655c1d0b6236b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1696247996878236072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdtp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e09
a24-84ff-4480-b1e4-39273ef37086,},Annotations:map[string]string{io.kubernetes.container.hash: cb8c26f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9171c6defa67e303dc9b30efbd23f4378fe4dd579bbcae014d7f44068b4eaab5,PodSandboxId:85f6c5d981253e9ec12d99b412af9f4331190426377c28862dda620fc39fae01,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1696247969450661547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1503f30103b4107023c1689b533624,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 4ae11ec6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d21512e1f3511763fa7ac50dfb88800acac5e24ee79277815e016946c18474,PodSandboxId:82a27613594c0acfa17eef576d4b90686ee3172a10b88c7177b04152807fc7c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1696247968045343775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394cafbfead7293254874e819f893bca2ea8aaaff9ae6292b622e35df660404,PodSandboxId:65e57f7a352a9f1c034ca87d29fedab7c6c00c2e7abd2dae7499329df732e12d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1696247967865800453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2034f979f948c1a718b43753d97a5ead,},Annotations:map[string]string{io.kubern
etes.container.hash: 33bf34cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4799bd0c57b132de5e00957eeb4c1380b4319d3c7cbd8a43e78aba49f7be28f8,PodSandboxId:723ffac35945d053d077481404d120373073043d8935ecef90f350f4a994a889,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1696247967587403016,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=88655433-e660-4799-89ff-ccd4001cab7e name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.559061938Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3680f4ec-a989-40fd-af19-a4318b8aaad4 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.559147839Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3680f4ec-a989-40fd-af19-a4318b8aaad4 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.560420820Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a100417c-b96c-4709-af01-97b3ff7ca991 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.561001748Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248844560986315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=a100417c-b96c-4709-af01-97b3ff7ca991 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.561458139Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bb18645f-2cc8-4990-83ff-bf79495c19f3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.561506157Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bb18645f-2cc8-4990-83ff-bf79495c19f3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.561707391Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:039738890c0c0c918dfb2589cc51cc129d9f7a885474027049d11260e016669d,PodSandboxId:c387574f801288e1a08cb1a6f4badabfbe4bc9cfe76e3ce1b94db5014c72045d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247997721919038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 893fe40b-d1d7-4569-8d99-85038005f53a,},Annotations:map[string]string{io.kubernetes.container.hash: f447fab5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bbbaa70397b802c00dcdcc10194a8a68b8409c0fbb5d0a3272f4eab93a5803a,PodSandboxId:63ed4ec3fc8a3e595cf3ab2f1bf8f972e2319f48bfa70fc828da0dc1514a59eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1696247997432387507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7b9bb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b437c1ff-d7c0-4708-b799-e4ca54bd00cc,},Annotations:map[string]string{io.kubernetes.container.hash: c65aea65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92b70651bc8fdda674d4dfa238bc0d51561a6a7d56207488bdb638acda4bc855,PodSandboxId:9cb28fe0eb66b72b0e737eb51d13adcbd75d9ed5c1bb6649e6655c1d0b6236b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1696247996878236072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdtp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e09
a24-84ff-4480-b1e4-39273ef37086,},Annotations:map[string]string{io.kubernetes.container.hash: cb8c26f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9171c6defa67e303dc9b30efbd23f4378fe4dd579bbcae014d7f44068b4eaab5,PodSandboxId:85f6c5d981253e9ec12d99b412af9f4331190426377c28862dda620fc39fae01,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1696247969450661547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1503f30103b4107023c1689b533624,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 4ae11ec6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d21512e1f3511763fa7ac50dfb88800acac5e24ee79277815e016946c18474,PodSandboxId:82a27613594c0acfa17eef576d4b90686ee3172a10b88c7177b04152807fc7c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1696247968045343775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394cafbfead7293254874e819f893bca2ea8aaaff9ae6292b622e35df660404,PodSandboxId:65e57f7a352a9f1c034ca87d29fedab7c6c00c2e7abd2dae7499329df732e12d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1696247967865800453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2034f979f948c1a718b43753d97a5ead,},Annotations:map[string]string{io.kubern
etes.container.hash: 33bf34cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4799bd0c57b132de5e00957eeb4c1380b4319d3c7cbd8a43e78aba49f7be28f8,PodSandboxId:723ffac35945d053d077481404d120373073043d8935ecef90f350f4a994a889,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1696247967587403016,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bb18645f-2cc8-4990-83ff-bf79495c19f3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.601350292Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7d406037-8af4-4977-8598-ddcec243fc99 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.601477582Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7d406037-8af4-4977-8598-ddcec243fc99 name=/runtime.v1.RuntimeService/Version
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.603216587Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=448ac4d8-5e53-4edc-bd9c-4a3555350531 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.603825372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696248844603807277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=448ac4d8-5e53-4edc-bd9c-4a3555350531 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.604567740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=18adc6cf-15f8-4e0e-ac15-66e876044edd name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.604684416Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=18adc6cf-15f8-4e0e-ac15-66e876044edd name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 12:14:04 old-k8s-version-749860 crio[715]: time="2023-10-02 12:14:04.604901524Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:039738890c0c0c918dfb2589cc51cc129d9f7a885474027049d11260e016669d,PodSandboxId:c387574f801288e1a08cb1a6f4badabfbe4bc9cfe76e3ce1b94db5014c72045d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696247997721919038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 893fe40b-d1d7-4569-8d99-85038005f53a,},Annotations:map[string]string{io.kubernetes.container.hash: f447fab5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bbbaa70397b802c00dcdcc10194a8a68b8409c0fbb5d0a3272f4eab93a5803a,PodSandboxId:63ed4ec3fc8a3e595cf3ab2f1bf8f972e2319f48bfa70fc828da0dc1514a59eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1696247997432387507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7b9bb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b437c1ff-d7c0-4708-b799-e4ca54bd00cc,},Annotations:map[string]string{io.kubernetes.container.hash: c65aea65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92b70651bc8fdda674d4dfa238bc0d51561a6a7d56207488bdb638acda4bc855,PodSandboxId:9cb28fe0eb66b72b0e737eb51d13adcbd75d9ed5c1bb6649e6655c1d0b6236b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1696247996878236072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdtp5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e09
a24-84ff-4480-b1e4-39273ef37086,},Annotations:map[string]string{io.kubernetes.container.hash: cb8c26f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9171c6defa67e303dc9b30efbd23f4378fe4dd579bbcae014d7f44068b4eaab5,PodSandboxId:85f6c5d981253e9ec12d99b412af9f4331190426377c28862dda620fc39fae01,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1696247969450661547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1503f30103b4107023c1689b533624,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 4ae11ec6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d21512e1f3511763fa7ac50dfb88800acac5e24ee79277815e016946c18474,PodSandboxId:82a27613594c0acfa17eef576d4b90686ee3172a10b88c7177b04152807fc7c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1696247968045343775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8394cafbfead7293254874e819f893bca2ea8aaaff9ae6292b622e35df660404,PodSandboxId:65e57f7a352a9f1c034ca87d29fedab7c6c00c2e7abd2dae7499329df732e12d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1696247967865800453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2034f979f948c1a718b43753d97a5ead,},Annotations:map[string]string{io.kubern
etes.container.hash: 33bf34cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4799bd0c57b132de5e00957eeb4c1380b4319d3c7cbd8a43e78aba49f7be28f8,PodSandboxId:723ffac35945d053d077481404d120373073043d8935ecef90f350f4a994a889,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1696247967587403016,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-749860,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=18adc6cf-15f8-4e0e-ac15-66e876044edd name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	039738890c0c0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   c387574f80128       storage-provisioner
	0bbbaa70397b8       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   14 minutes ago      Running             coredns                   0                   63ed4ec3fc8a3       coredns-5644d7b6d9-7b9bb
	92b70651bc8fd       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   14 minutes ago      Running             kube-proxy                0                   9cb28fe0eb66b       kube-proxy-mdtp5
	9171c6defa67e       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   14 minutes ago      Running             etcd                      0                   85f6c5d981253       etcd-old-k8s-version-749860
	27d21512e1f35       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   14 minutes ago      Running             kube-scheduler            0                   82a27613594c0       kube-scheduler-old-k8s-version-749860
	8394cafbfead7       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   14 minutes ago      Running             kube-apiserver            0                   65e57f7a352a9       kube-apiserver-old-k8s-version-749860
	4799bd0c57b13       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   14 minutes ago      Running             kube-controller-manager   0                   723ffac35945d       kube-controller-manager-old-k8s-version-749860
	
	* 
	* ==> coredns [0bbbaa70397b802c00dcdcc10194a8a68b8409c0fbb5d0a3272f4eab93a5803a] <==
	* .:53
	2023-10-02T11:59:57.723Z [INFO] plugin/reload: Running configuration MD5 = 6d61b2f41ed11e6ad276aa627263dbc3
	2023-10-02T11:59:57.724Z [INFO] CoreDNS-1.6.2
	2023-10-02T11:59:57.724Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-10-02T11:59:58.743Z [INFO] 127.0.0.1:44886 - 61132 "HINFO IN 8385809371994932739.3761755439345032964. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022856664s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-749860
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-749860
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=old-k8s-version-749860
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T11_59_38_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 11:59:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 12:13:34 +0000   Mon, 02 Oct 2023 11:59:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 12:13:34 +0000   Mon, 02 Oct 2023 11:59:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 12:13:34 +0000   Mon, 02 Oct 2023 11:59:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 12:13:34 +0000   Mon, 02 Oct 2023 11:59:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.82
	  Hostname:    old-k8s-version-749860
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 be9b48f6bc7c4943a52c7e86d3eca20b
	 System UUID:                be9b48f6-bc7c-4943-a52c-7e86d3eca20b
	 Boot ID:                    fe9fde7a-fce1-478f-bc16-9c4054693c03
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-7b9bb                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                etcd-old-k8s-version-749860                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-apiserver-old-k8s-version-749860             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-controller-manager-old-k8s-version-749860    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-proxy-mdtp5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                kube-scheduler-old-k8s-version-749860             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                metrics-server-74d5856cc6-n7z95                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet, old-k8s-version-749860     Node old-k8s-version-749860 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet, old-k8s-version-749860     Node old-k8s-version-749860 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet, old-k8s-version-749860     Node old-k8s-version-749860 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kube-proxy, old-k8s-version-749860  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct 2 11:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070695] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.325492] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.441878] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153738] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.448486] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000051] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.315526] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.131775] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.163930] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.128336] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.236769] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[Oct 2 11:54] systemd-fstab-generator[1029]: Ignoring "noauto" for root device
	[  +0.476621] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +21.474357] hrtimer: interrupt took 4032902 ns
	[  +3.948944] kauditd_printk_skb: 13 callbacks suppressed
	[ +10.083454] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 2 11:59] systemd-fstab-generator[3181]: Ignoring "noauto" for root device
	[  +0.803214] kauditd_printk_skb: 6 callbacks suppressed
	[Oct 2 12:00] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [9171c6defa67e303dc9b30efbd23f4378fe4dd579bbcae014d7f44068b4eaab5] <==
	* 2023-10-02 11:59:29.601856 I | raft: 8f4fcab0df4f7c44 became follower at term 0
	2023-10-02 11:59:29.601877 I | raft: newRaft 8f4fcab0df4f7c44 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-10-02 11:59:29.601892 I | raft: 8f4fcab0df4f7c44 became follower at term 1
	2023-10-02 11:59:29.612024 W | auth: simple token is not cryptographically signed
	2023-10-02 11:59:29.617130 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-10-02 11:59:29.619007 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-02 11:59:29.619274 I | embed: listening for metrics on http://192.168.83.82:2381
	2023-10-02 11:59:29.619543 I | etcdserver: 8f4fcab0df4f7c44 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-02 11:59:29.620083 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-02 11:59:29.620346 I | etcdserver/membership: added member 8f4fcab0df4f7c44 [https://192.168.83.82:2380] to cluster cf7ed821fb17c7fa
	2023-10-02 11:59:30.302490 I | raft: 8f4fcab0df4f7c44 is starting a new election at term 1
	2023-10-02 11:59:30.302715 I | raft: 8f4fcab0df4f7c44 became candidate at term 2
	2023-10-02 11:59:30.302913 I | raft: 8f4fcab0df4f7c44 received MsgVoteResp from 8f4fcab0df4f7c44 at term 2
	2023-10-02 11:59:30.302952 I | raft: 8f4fcab0df4f7c44 became leader at term 2
	2023-10-02 11:59:30.303074 I | raft: raft.node: 8f4fcab0df4f7c44 elected leader 8f4fcab0df4f7c44 at term 2
	2023-10-02 11:59:30.303439 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-02 11:59:30.305023 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-02 11:59:30.305268 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-02 11:59:30.305915 I | etcdserver: published {Name:old-k8s-version-749860 ClientURLs:[https://192.168.83.82:2379]} to cluster cf7ed821fb17c7fa
	2023-10-02 11:59:30.306042 I | embed: ready to serve client requests
	2023-10-02 11:59:30.306278 I | embed: ready to serve client requests
	2023-10-02 11:59:30.307496 I | embed: serving client requests on 192.168.83.82:2379
	2023-10-02 11:59:30.309285 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-02 12:09:30.335675 I | mvcc: store.index: compact 663
	2023-10-02 12:09:30.338005 I | mvcc: finished scheduled compaction at 663 (took 1.703277ms)
	
	* 
	* ==> kernel <==
	*  12:14:04 up 20 min,  0 users,  load average: 0.18, 0.21, 0.24
	Linux old-k8s-version-749860 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [8394cafbfead7293254874e819f893bca2ea8aaaff9ae6292b622e35df660404] <==
	* I1002 12:05:34.783766       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1002 12:05:34.783945       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 12:05:34.784028       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:05:34.784038       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:07:34.784727       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1002 12:07:34.785393       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 12:07:34.785839       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:07:34.785993       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:09:34.786147       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1002 12:09:34.786764       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 12:09:34.786913       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:09:34.786995       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:10:34.787473       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1002 12:10:34.787936       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 12:10:34.788003       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:10:34.788041       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 12:12:34.788330       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1002 12:12:34.788441       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 12:12:34.788504       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 12:12:34.788515       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [4799bd0c57b132de5e00957eeb4c1380b4319d3c7cbd8a43e78aba49f7be28f8] <==
	* W1002 12:07:55.224387       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:07:59.109666       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:08:27.226866       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:08:29.361899       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:08:59.228835       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:08:59.613794       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1002 12:09:29.865813       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:09:31.231150       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:10:00.117937       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:10:03.233333       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:10:30.373543       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:10:35.235735       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:11:00.625740       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:11:07.237801       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:11:30.877911       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:11:39.239956       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:12:01.130311       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:12:11.242129       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:12:31.382533       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:12:43.244148       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:13:01.634739       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:13:15.246175       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:13:31.886792       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 12:13:47.248270       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 12:14:02.138876       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [92b70651bc8fdda674d4dfa238bc0d51561a6a7d56207488bdb638acda4bc855] <==
	* W1002 11:59:57.693438       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1002 11:59:57.718970       1 node.go:135] Successfully retrieved node IP: 192.168.83.82
	I1002 11:59:57.719042       1 server_others.go:149] Using iptables Proxier.
	I1002 11:59:57.731898       1 server.go:529] Version: v1.16.0
	I1002 11:59:57.732895       1 config.go:131] Starting endpoints config controller
	I1002 11:59:57.732953       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1002 11:59:57.732985       1 config.go:313] Starting service config controller
	I1002 11:59:57.733004       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1002 11:59:57.841095       1 shared_informer.go:204] Caches are synced for service config 
	I1002 11:59:57.841261       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [27d21512e1f3511763fa7ac50dfb88800acac5e24ee79277815e016946c18474] <==
	* I1002 11:59:33.803312       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1002 11:59:33.803755       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1002 11:59:33.853523       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 11:59:33.853743       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 11:59:33.853879       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 11:59:33.853954       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 11:59:33.854321       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 11:59:33.854377       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 11:59:33.854412       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 11:59:33.854446       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 11:59:33.854493       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 11:59:33.856067       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 11:59:33.856175       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 11:59:34.855834       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 11:59:34.857659       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 11:59:34.859093       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 11:59:34.860530       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 11:59:34.863260       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 11:59:34.864881       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 11:59:34.865077       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 11:59:34.866198       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 11:59:34.866760       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 11:59:34.867744       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 11:59:34.869050       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 11:59:54.779168       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 11:53:50 UTC, ends at Mon 2023-10-02 12:14:05 UTC. --
	Oct 02 12:09:26 old-k8s-version-749860 kubelet[3187]: E1002 12:09:26.413860    3187 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Oct 02 12:09:35 old-k8s-version-749860 kubelet[3187]: E1002 12:09:35.267551    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:09:50 old-k8s-version-749860 kubelet[3187]: E1002 12:09:50.267270    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:10:01 old-k8s-version-749860 kubelet[3187]: E1002 12:10:01.267652    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:10:14 old-k8s-version-749860 kubelet[3187]: E1002 12:10:14.270146    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:10:29 old-k8s-version-749860 kubelet[3187]: E1002 12:10:29.267733    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:10:44 old-k8s-version-749860 kubelet[3187]: E1002 12:10:44.282934    3187 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 02 12:10:44 old-k8s-version-749860 kubelet[3187]: E1002 12:10:44.283051    3187 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 02 12:10:44 old-k8s-version-749860 kubelet[3187]: E1002 12:10:44.283106    3187 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 02 12:10:44 old-k8s-version-749860 kubelet[3187]: E1002 12:10:44.283134    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Oct 02 12:10:56 old-k8s-version-749860 kubelet[3187]: E1002 12:10:56.267743    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:11:11 old-k8s-version-749860 kubelet[3187]: E1002 12:11:11.267052    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:11:24 old-k8s-version-749860 kubelet[3187]: E1002 12:11:24.267428    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:11:38 old-k8s-version-749860 kubelet[3187]: E1002 12:11:38.267226    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:11:52 old-k8s-version-749860 kubelet[3187]: E1002 12:11:52.267345    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:12:05 old-k8s-version-749860 kubelet[3187]: E1002 12:12:05.267809    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:12:20 old-k8s-version-749860 kubelet[3187]: E1002 12:12:20.268740    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:12:32 old-k8s-version-749860 kubelet[3187]: E1002 12:12:32.268080    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:12:45 old-k8s-version-749860 kubelet[3187]: E1002 12:12:45.267100    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:12:56 old-k8s-version-749860 kubelet[3187]: E1002 12:12:56.267508    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:13:08 old-k8s-version-749860 kubelet[3187]: E1002 12:13:08.267701    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:13:22 old-k8s-version-749860 kubelet[3187]: E1002 12:13:22.267282    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:13:36 old-k8s-version-749860 kubelet[3187]: E1002 12:13:36.267818    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:13:48 old-k8s-version-749860 kubelet[3187]: E1002 12:13:48.267681    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 12:14:01 old-k8s-version-749860 kubelet[3187]: E1002 12:14:01.267373    3187 pod_workers.go:191] Error syncing pod 8ced0464-64fb-40b5-bd97-0c7b8b9bebc2 ("metrics-server-74d5856cc6-n7z95_kube-system(8ced0464-64fb-40b5-bd97-0c7b8b9bebc2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [039738890c0c0c918dfb2589cc51cc129d9f7a885474027049d11260e016669d] <==
	* I1002 11:59:57.862445       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 11:59:57.888114       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 11:59:57.888234       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 11:59:57.901481       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 11:59:57.904920       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-749860_384a1e12-a88f-47f3-bac3-1cb79a4b9540!
	I1002 11:59:57.906017       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"67313b9b-3b30-4b05-a538-a4ddd4744015", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-749860_384a1e12-a88f-47f3-bac3-1cb79a4b9540 became leader
	I1002 11:59:58.006001       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-749860_384a1e12-a88f-47f3-bac3-1cb79a4b9540!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-749860 -n old-k8s-version-749860
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-749860 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-n7z95
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-749860 describe pod metrics-server-74d5856cc6-n7z95
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-749860 describe pod metrics-server-74d5856cc6-n7z95: exit status 1 (70.612581ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-n7z95" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-749860 describe pod metrics-server-74d5856cc6-n7z95: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (241.84s)


Test pass (225/288)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 25.38
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.28.2/json-events 14.91
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.13
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
19 TestBinaryMirror 0.87
20 TestOffline 135.08
22 TestAddons/Setup 153.42
24 TestAddons/parallel/Registry 20.57
26 TestAddons/parallel/InspektorGadget 10.86
27 TestAddons/parallel/MetricsServer 6.09
28 TestAddons/parallel/HelmTiller 13.45
30 TestAddons/parallel/CSI 77.7
31 TestAddons/parallel/Headlamp 19.51
32 TestAddons/parallel/CloudSpanner 5.77
33 TestAddons/parallel/LocalPath 62.2
36 TestAddons/serial/GCPAuth/Namespaces 0.13
38 TestCertOptions 84.11
39 TestCertExpiration 478.88
41 TestForceSystemdFlag 133.68
42 TestForceSystemdEnv 50.55
44 TestKVMDriverInstallOrUpdate 2.94
48 TestErrorSpam/setup 44.91
49 TestErrorSpam/start 0.33
50 TestErrorSpam/status 0.72
51 TestErrorSpam/pause 1.58
52 TestErrorSpam/unpause 1.67
53 TestErrorSpam/stop 2.2
56 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/StartWithProxy 66
58 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/SoftStart 73.05
60 TestFunctional/serial/KubeContext 0.05
61 TestFunctional/serial/KubectlGetPods 0.08
64 TestFunctional/serial/CacheCmd/cache/add_remote 3.1
65 TestFunctional/serial/CacheCmd/cache/add_local 2.2
66 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
67 TestFunctional/serial/CacheCmd/cache/list 0.04
68 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
69 TestFunctional/serial/CacheCmd/cache/cache_reload 1.61
70 TestFunctional/serial/CacheCmd/cache/delete 0.08
71 TestFunctional/serial/MinikubeKubectlCmd 0.1
72 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
73 TestFunctional/serial/ExtraConfig 37.18
74 TestFunctional/serial/ComponentHealth 0.07
75 TestFunctional/serial/LogsCmd 1.58
76 TestFunctional/serial/LogsFileCmd 1.51
77 TestFunctional/serial/InvalidService 4.57
79 TestFunctional/parallel/ConfigCmd 0.31
80 TestFunctional/parallel/DashboardCmd 21.84
81 TestFunctional/parallel/DryRun 0.27
82 TestFunctional/parallel/InternationalLanguage 0.13
83 TestFunctional/parallel/StatusCmd 1.4
87 TestFunctional/parallel/ServiceCmdConnect 12.68
88 TestFunctional/parallel/AddonsCmd 0.11
89 TestFunctional/parallel/PersistentVolumeClaim 60.81
91 TestFunctional/parallel/SSHCmd 0.48
92 TestFunctional/parallel/CpCmd 0.89
93 TestFunctional/parallel/MySQL 34.79
94 TestFunctional/parallel/FileSync 0.22
95 TestFunctional/parallel/CertSync 1.71
99 TestFunctional/parallel/NodeLabels 0.08
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
103 TestFunctional/parallel/License 0.61
104 TestFunctional/parallel/ServiceCmd/DeployApp 11.27
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.92
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.96
121 TestFunctional/parallel/ImageCommands/Setup 1.99
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.96
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.51
124 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 9.72
125 TestFunctional/parallel/ServiceCmd/List 0.32
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.4
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
128 TestFunctional/parallel/ServiceCmd/Format 0.42
129 TestFunctional/parallel/ServiceCmd/URL 0.46
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
131 TestFunctional/parallel/ProfileCmd/profile_list 0.34
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
133 TestFunctional/parallel/MountCmd/any-port 12.34
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.58
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.62
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.34
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.37
141 TestFunctional/parallel/MountCmd/specific-port 1.73
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.58
143 TestFunctional/delete_addon-resizer_images 0.07
144 TestFunctional/delete_my-image_image 0.02
145 TestFunctional/delete_minikube_cached_images 0.02
149 TestIngressAddonLegacy/StartLegacyK8sCluster 80.28
151 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.47
152 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.59
156 TestJSONOutput/start/Command 109.96
157 TestJSONOutput/start/Audit 0
159 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
162 TestJSONOutput/pause/Command 0.7
163 TestJSONOutput/pause/Audit 0
165 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/unpause/Command 0.64
169 TestJSONOutput/unpause/Audit 0
171 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/stop/Command 7.09
175 TestJSONOutput/stop/Audit 0
177 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
179 TestErrorJSONOutput 0.19
184 TestMainNoArgs 0.04
185 TestMinikubeProfile 100.98
188 TestMountStart/serial/StartWithMountFirst 28.26
189 TestMountStart/serial/VerifyMountFirst 0.38
190 TestMountStart/serial/StartWithMountSecond 29.89
191 TestMountStart/serial/VerifyMountSecond 0.38
192 TestMountStart/serial/DeleteFirst 0.88
193 TestMountStart/serial/VerifyMountPostDelete 0.38
194 TestMountStart/serial/Stop 1.11
195 TestMountStart/serial/RestartStopped 22.27
196 TestMountStart/serial/VerifyMountPostStop 0.37
199 TestMultiNode/serial/FreshStart2Nodes 112.4
200 TestMultiNode/serial/DeployApp2Nodes 5.83
202 TestMultiNode/serial/AddNode 45.31
203 TestMultiNode/serial/ProfileList 0.21
204 TestMultiNode/serial/CopyFile 7.46
205 TestMultiNode/serial/StopNode 2.97
206 TestMultiNode/serial/StartAfterStop 31.04
208 TestMultiNode/serial/DeleteNode 1.74
210 TestMultiNode/serial/RestartMultiNode 446.27
211 TestMultiNode/serial/ValidateNameConflict 47.75
218 TestScheduledStopUnix 117.71
224 TestKubernetesUpgrade 202.15
228 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
231 TestNoKubernetes/serial/StartWithK8s 104.86
236 TestNetworkPlugins/group/false 2.82
240 TestNoKubernetes/serial/StartWithStopK8s 67
241 TestNoKubernetes/serial/Start 80.94
242 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
243 TestNoKubernetes/serial/ProfileList 0.82
244 TestNoKubernetes/serial/Stop 1.42
246 TestStoppedBinaryUpgrade/Setup 1.95
256 TestPause/serial/Start 127.02
257 TestNetworkPlugins/group/auto/Start 102.86
259 TestNetworkPlugins/group/kindnet/Start 69.69
260 TestNetworkPlugins/group/calico/Start 104.49
261 TestNetworkPlugins/group/auto/KubeletFlags 0.22
262 TestNetworkPlugins/group/auto/NetCatPod 12.39
263 TestNetworkPlugins/group/auto/DNS 0.18
264 TestNetworkPlugins/group/auto/Localhost 0.15
265 TestNetworkPlugins/group/auto/HairPin 0.15
266 TestNetworkPlugins/group/custom-flannel/Start 123.61
267 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
268 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
269 TestNetworkPlugins/group/kindnet/NetCatPod 12.41
270 TestNetworkPlugins/group/kindnet/DNS 0.18
271 TestNetworkPlugins/group/kindnet/Localhost 0.15
272 TestNetworkPlugins/group/kindnet/HairPin 0.15
273 TestStoppedBinaryUpgrade/MinikubeLogs 0.46
274 TestNetworkPlugins/group/enable-default-cni/Start 132.14
275 TestNetworkPlugins/group/flannel/Start 141.82
276 TestNetworkPlugins/group/calico/ControllerPod 5.03
277 TestNetworkPlugins/group/calico/KubeletFlags 0.23
278 TestNetworkPlugins/group/calico/NetCatPod 17.44
279 TestNetworkPlugins/group/calico/DNS 0.18
280 TestNetworkPlugins/group/calico/Localhost 0.16
281 TestNetworkPlugins/group/calico/HairPin 0.17
282 TestNetworkPlugins/group/bridge/Start 90.43
283 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
284 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.12
285 TestNetworkPlugins/group/custom-flannel/DNS 0.19
286 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
287 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
289 TestStartStop/group/old-k8s-version/serial/FirstStart 147.64
290 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
291 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.56
292 TestNetworkPlugins/group/flannel/ControllerPod 5.03
293 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
294 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
295 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
296 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
297 TestNetworkPlugins/group/flannel/NetCatPod 16.33
298 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
299 TestNetworkPlugins/group/bridge/NetCatPod 14.55
301 TestStartStop/group/no-preload/serial/FirstStart 87.95
302 TestNetworkPlugins/group/bridge/DNS 26.21
303 TestNetworkPlugins/group/flannel/DNS 0.25
304 TestNetworkPlugins/group/flannel/Localhost 0.21
305 TestNetworkPlugins/group/flannel/HairPin 0.2
307 TestStartStop/group/embed-certs/serial/FirstStart 116.99
308 TestNetworkPlugins/group/bridge/Localhost 0.17
309 TestNetworkPlugins/group/bridge/HairPin 0.18
311 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 117.38
312 TestStartStop/group/no-preload/serial/DeployApp 13.57
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.34
315 TestStartStop/group/old-k8s-version/serial/DeployApp 10.48
316 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.91
318 TestStartStop/group/embed-certs/serial/DeployApp 10.45
319 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.41
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.1
325 TestStartStop/group/no-preload/serial/SecondStart 696.55
327 TestStartStop/group/old-k8s-version/serial/SecondStart 706.97
329 TestStartStop/group/embed-certs/serial/SecondStart 596.22
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 560.34
341 TestStartStop/group/newest-cni/serial/FirstStart 61.17
342 TestStartStop/group/newest-cni/serial/DeployApp 0
343 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.9
344 TestStartStop/group/newest-cni/serial/Stop 7.1
345 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
346 TestStartStop/group/newest-cni/serial/SecondStart 48.71
347 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
350 TestStartStop/group/newest-cni/serial/Pause 2.48
TestDownloadOnly/v1.16.0/json-events (25.38s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-752606 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-752606 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (25.383921619s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (25.38s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-752606
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-752606: exit status 85 (57.076484ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-752606 | jenkins | v1.31.2 | 02 Oct 23 10:35 UTC |          |
	|         | -p download-only-752606        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 10:35:49
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 10:35:49.500836  339877 out.go:296] Setting OutFile to fd 1 ...
	I1002 10:35:49.500950  339877 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:35:49.500961  339877 out.go:309] Setting ErrFile to fd 2...
	I1002 10:35:49.500966  339877 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:35:49.501130  339877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	W1002 10:35:49.501238  339877 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17340-332611/.minikube/config/config.json: open /home/jenkins/minikube-integration/17340-332611/.minikube/config/config.json: no such file or directory
	I1002 10:35:49.501898  339877 out.go:303] Setting JSON to true
	I1002 10:35:49.502914  339877 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4696,"bootTime":1696238254,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 10:35:49.502973  339877 start.go:138] virtualization: kvm guest
	I1002 10:35:49.505848  339877 out.go:97] [download-only-752606] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 10:35:49.507454  339877 out.go:169] MINIKUBE_LOCATION=17340
	W1002 10:35:49.505968  339877 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 10:35:49.506014  339877 notify.go:220] Checking for updates...
	I1002 10:35:49.510245  339877 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 10:35:49.511789  339877 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 10:35:49.513252  339877 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 10:35:49.514564  339877 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 10:35:49.517074  339877 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 10:35:49.517322  339877 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 10:35:49.549790  339877 out.go:97] Using the kvm2 driver based on user configuration
	I1002 10:35:49.549863  339877 start.go:298] selected driver: kvm2
	I1002 10:35:49.549874  339877 start.go:902] validating driver "kvm2" against <nil>
	I1002 10:35:49.550168  339877 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:35:49.550258  339877 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 10:35:49.565263  339877 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 10:35:49.565312  339877 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 10:35:49.565829  339877 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1002 10:35:49.565970  339877 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 10:35:49.566006  339877 cni.go:84] Creating CNI manager for ""
	I1002 10:35:49.566018  339877 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 10:35:49.566027  339877 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 10:35:49.566034  339877 start_flags.go:321] config:
	{Name:download-only-752606 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-752606 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:35:49.566240  339877 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:35:49.568146  339877 out.go:97] Downloading VM boot image ...
	I1002 10:35:49.568184  339877 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1002 10:35:59.027395  339877 out.go:97] Starting control plane node download-only-752606 in cluster download-only-752606
	I1002 10:35:59.027434  339877 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1002 10:35:59.134135  339877 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1002 10:35:59.134186  339877 cache.go:57] Caching tarball of preloaded images
	I1002 10:35:59.134393  339877 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1002 10:35:59.136595  339877 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1002 10:35:59.136611  339877 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1002 10:35:59.254640  339877 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1002 10:36:12.297189  339877 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1002 10:36:12.297274  339877 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1002 10:36:13.191906  339877 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1002 10:36:13.192301  339877 profile.go:148] Saving config to /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/download-only-752606/config.json ...
	I1002 10:36:13.192349  339877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/download-only-752606/config.json: {Name:mk4f0d14a4d8b64bca275df4c34ffdf3a5b12386 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 10:36:13.192528  339877 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1002 10:36:13.192685  339877 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-752606"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

TestDownloadOnly/v1.28.2/json-events (14.91s)

=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-752606 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-752606 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.909716867s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (14.91s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-752606
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-752606: exit status 85 (55.085156ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-752606 | jenkins | v1.31.2 | 02 Oct 23 10:35 UTC |          |
	|         | -p download-only-752606        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-752606 | jenkins | v1.31.2 | 02 Oct 23 10:36 UTC |          |
	|         | -p download-only-752606        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 10:36:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 10:36:14.942513  339967 out.go:296] Setting OutFile to fd 1 ...
	I1002 10:36:14.942753  339967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:36:14.942762  339967 out.go:309] Setting ErrFile to fd 2...
	I1002 10:36:14.942767  339967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:36:14.942925  339967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	W1002 10:36:14.943040  339967 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17340-332611/.minikube/config/config.json: open /home/jenkins/minikube-integration/17340-332611/.minikube/config/config.json: no such file or directory
	I1002 10:36:14.943464  339967 out.go:303] Setting JSON to true
	I1002 10:36:14.944320  339967 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4721,"bootTime":1696238254,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 10:36:14.944378  339967 start.go:138] virtualization: kvm guest
	I1002 10:36:14.946503  339967 out.go:97] [download-only-752606] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 10:36:14.948276  339967 out.go:169] MINIKUBE_LOCATION=17340
	I1002 10:36:14.946688  339967 notify.go:220] Checking for updates...
	I1002 10:36:14.951496  339967 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 10:36:14.952921  339967 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 10:36:14.954311  339967 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 10:36:14.955588  339967 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 10:36:14.958167  339967 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 10:36:14.958636  339967 config.go:182] Loaded profile config "download-only-752606": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1002 10:36:14.958726  339967 start.go:810] api.Load failed for download-only-752606: filestore "download-only-752606": Docker machine "download-only-752606" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1002 10:36:14.958802  339967 driver.go:373] Setting default libvirt URI to qemu:///system
	W1002 10:36:14.958837  339967 start.go:810] api.Load failed for download-only-752606: filestore "download-only-752606": Docker machine "download-only-752606" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1002 10:36:14.990839  339967 out.go:97] Using the kvm2 driver based on existing profile
	I1002 10:36:14.990864  339967 start.go:298] selected driver: kvm2
	I1002 10:36:14.990876  339967 start.go:902] validating driver "kvm2" against &{Name:download-only-752606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-752606 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:36:14.991346  339967 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:36:14.991453  339967 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17340-332611/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 10:36:15.006382  339967 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 10:36:15.007152  339967 cni.go:84] Creating CNI manager for ""
	I1002 10:36:15.007169  339967 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 10:36:15.007182  339967 start_flags.go:321] config:
	{Name:download-only-752606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-752606 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:36:15.007353  339967 iso.go:125] acquiring lock: {Name:mk7de30231d07df2d4c6e3bdeda8fe7f7e574116 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 10:36:15.009107  339967 out.go:97] Starting control plane node download-only-752606 in cluster download-only-752606
	I1002 10:36:15.009119  339967 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 10:36:15.128439  339967 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1002 10:36:15.128472  339967 cache.go:57] Caching tarball of preloaded images
	I1002 10:36:15.128633  339967 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 10:36:15.130646  339967 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1002 10:36:15.130664  339967 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 ...
	I1002 10:36:15.241053  339967 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:63ef340a9dae90462e676325aa502af3 -> /home/jenkins/minikube-integration/17340-332611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
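The preload tarball above is fetched with an md5 digest pinned in the URL's `?checksum=md5:...` query, which minikube's `download.go` verifies before trusting the file. A stand-alone sketch of that verify-after-download step, assuming GNU `md5sum` is available (the demo file and digest below are illustrative, not the preload's):

```shell
#!/bin/sh
# verify_md5 FILE EXPECTED: compare a file's md5 digest against an
# expected value, the way a pinned ?checksum=md5:... suffix is checked
# before a downloaded tarball is trusted.
verify_md5() {
    file=$1 expected=$2
    actual=$(md5sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "checksum ok"
    else
        echo "checksum mismatch: got $actual, want $expected" >&2
        return 1
    fi
}

# Demo on a throwaway file with a known digest: md5("hello").
tmp=$(mktemp)
printf 'hello' > "$tmp"
result=$(verify_md5 "$tmp" 5d41402abc4b2a76b9719d911017c592)
echo "$result"
rm -f "$tmp"
```

On a mismatch the function reports both digests and returns non-zero, which is the behavior a caller would want before unpacking a cached preload.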
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-752606"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.06s)

TestDownloadOnly/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.13s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-752606
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.87s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-775199 --alsologtostderr --binary-mirror http://127.0.0.1:34689 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-775199" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-775199
--- PASS: TestBinaryMirror (0.87s)

TestOffline (135.08s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-091993 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-091993 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m14.030827649s)
helpers_test.go:175: Cleaning up "offline-crio-091993" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-091993
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-091993: (1.049729343s)
--- PASS: TestOffline (135.08s)

TestAddons/Setup (153.42s)

=== RUN   TestAddons/Setup
addons_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p addons-304007 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p addons-304007 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m33.415411429s)
--- PASS: TestAddons/Setup (153.42s)

TestAddons/parallel/Registry (20.57s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:308: registry stabilized in 28.679718ms
addons_test.go:310: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-v682v" [511b1064-d462-426c-9606-a5290d7ea3e6] Running
addons_test.go:310: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.020750599s
addons_test.go:313: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-b2tdg" [ebe43d1f-3aef-4e43-8685-e0ac4f3285d8] Running
addons_test.go:313: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.01583258s
addons_test.go:318: (dbg) Run:  kubectl --context addons-304007 delete po -l run=registry-test --now
addons_test.go:323: (dbg) Run:  kubectl --context addons-304007 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:323: (dbg) Done: kubectl --context addons-304007 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.58988488s)
addons_test.go:337: (dbg) Run:  out/minikube-linux-amd64 -p addons-304007 ip
2023/10/02 10:39:24 [DEBUG] GET http://192.168.39.235:5000
addons_test.go:366: (dbg) Run:  out/minikube-linux-amd64 -p addons-304007 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.57s)

TestAddons/parallel/InspektorGadget (10.86s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2lbkr" [69383541-85a4-4ac6-8d0f-d6be188bca46] Running
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.029495411s
addons_test.go:819: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-304007
addons_test.go:819: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-304007: (5.82507255s)
--- PASS: TestAddons/parallel/InspektorGadget (10.86s)

TestAddons/parallel/MetricsServer (6.09s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:385: metrics-server stabilized in 28.606184ms
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-cgzmw" [c6c6e12d-2982-4aa7-9bcb-8a6224dd0772] Running
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.017501821s
addons_test.go:393: (dbg) Run:  kubectl --context addons-304007 top pods -n kube-system
addons_test.go:410: (dbg) Run:  out/minikube-linux-amd64 -p addons-304007 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.09s)

TestAddons/parallel/HelmTiller (13.45s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:434: tiller-deploy stabilized in 5.095456ms
addons_test.go:436: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-npbhs" [37a2928b-d3bd-4586-9c85-bfdbba5b2c4a] Running
addons_test.go:436: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.021103392s
addons_test.go:451: (dbg) Run:  kubectl --context addons-304007 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:451: (dbg) Done: kubectl --context addons-304007 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.489803725s)
addons_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p addons-304007 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.45s)

TestAddons/parallel/CSI (77.7s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:539: csi-hostpath-driver pods stabilized in 6.721642ms
addons_test.go:542: (dbg) Run:  kubectl --context addons-304007 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:547: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc -o jsonpath={.status.phase} -n default
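Each `helpers_test.go:394` line above is one iteration of the same poll: re-run `kubectl get pvc hpvc -o jsonpath={.status.phase}` until the claim reports `Bound`. A minimal self-contained sketch of that loop, with a hypothetical `stub_phase` standing in for the kubectl call so the demo runs without a cluster:

```shell
#!/bin/sh
# poll_until WANT CMD...: re-run CMD until its stdout equals WANT,
# giving up after 30 attempts -- the same shape as the repeated
# jsonpath polls in the log above.
poll_until() {
    want=$1; shift
    i=0
    while [ "$i" -lt 30 ]; do
        i=$((i + 1))
        if [ "$("$@")" = "$want" ]; then
            echo "ready after $i checks"
            return 0
        fi
    done
    echo "timed out waiting for $want" >&2
    return 1
}

# Demo with a stub in place of the real kubectl call: the reported
# "phase" flips from Pending to Bound on the third check.
STATE=$(mktemp)
echo 0 > "$STATE"
stub_phase() {
    n=$(( $(cat "$STATE") + 1 ))
    echo "$n" > "$STATE"
    if [ "$n" -ge 3 ]; then echo Bound; else echo Pending; fi
}
msg=$(poll_until Bound stub_phase)
echo "$msg"
rm -f "$STATE"
```

Against a live cluster the command argument would be the kubectl invocation from the log, e.g. `poll_until Bound kubectl --context addons-304007 get pvc hpvc -o 'jsonpath={.status.phase}' -n default` (or, more directly, `kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc`).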
addons_test.go:552: (dbg) Run:  kubectl --context addons-304007 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [772c17f0-4cb7-4551-8925-313f10a4fe5e] Pending
helpers_test.go:344: "task-pv-pod" [772c17f0-4cb7-4551-8925-313f10a4fe5e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [772c17f0-4cb7-4551-8925-313f10a4fe5e] Running
addons_test.go:557: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.028574172s
addons_test.go:562: (dbg) Run:  kubectl --context addons-304007 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-304007 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-304007 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-304007 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-304007 delete pod task-pv-pod
addons_test.go:572: (dbg) Done: kubectl --context addons-304007 delete pod task-pv-pod: (1.440569087s)
addons_test.go:578: (dbg) Run:  kubectl --context addons-304007 delete pvc hpvc
addons_test.go:584: (dbg) Run:  kubectl --context addons-304007 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-304007 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [20b2b294-b3f4-478d-a1b6-40e63224d580] Pending
helpers_test.go:344: "task-pv-pod-restore" [20b2b294-b3f4-478d-a1b6-40e63224d580] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [20b2b294-b3f4-478d-a1b6-40e63224d580] Running
addons_test.go:599: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.023703084s
addons_test.go:604: (dbg) Run:  kubectl --context addons-304007 delete pod task-pv-pod-restore
addons_test.go:604: (dbg) Done: kubectl --context addons-304007 delete pod task-pv-pod-restore: (1.466178115s)
addons_test.go:608: (dbg) Run:  kubectl --context addons-304007 delete pvc hpvc-restore
addons_test.go:612: (dbg) Run:  kubectl --context addons-304007 delete volumesnapshot new-snapshot-demo
addons_test.go:616: (dbg) Run:  out/minikube-linux-amd64 -p addons-304007 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:616: (dbg) Done: out/minikube-linux-amd64 -p addons-304007 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.955195605s)
addons_test.go:620: (dbg) Run:  out/minikube-linux-amd64 -p addons-304007 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (77.70s)

TestAddons/parallel/Headlamp (19.51s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:802: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-304007 --alsologtostderr -v=1
addons_test.go:802: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-304007 --alsologtostderr -v=1: (1.420835794s)
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-ldbkr" [0254fb09-ea15-4286-96e8-d8faaf78ebc9] Pending
helpers_test.go:344: "headlamp-58b88cff49-ldbkr" [0254fb09-ea15-4286-96e8-d8faaf78ebc9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-ldbkr" [0254fb09-ea15-4286-96e8-d8faaf78ebc9] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-ldbkr" [0254fb09-ea15-4286-96e8-d8faaf78ebc9] Running
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 18.090434395s
--- PASS: TestAddons/parallel/Headlamp (19.51s)

TestAddons/parallel/CloudSpanner (5.77s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-gfqdf" [8d6ff853-9c3f-4f63-af18-3660ade5334f] Running
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010519814s
addons_test.go:838: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-304007
--- PASS: TestAddons/parallel/CloudSpanner (5.77s)

TestAddons/parallel/LocalPath (62.2s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:851: (dbg) Run:  kubectl --context addons-304007 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:857: (dbg) Run:  kubectl --context addons-304007 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:861: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-304007 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [eda818e4-48b1-4499-9577-5d2b014af55e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [eda818e4-48b1-4499-9577-5d2b014af55e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [eda818e4-48b1-4499-9577-5d2b014af55e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.013096442s
addons_test.go:869: (dbg) Run:  kubectl --context addons-304007 get pvc test-pvc -o=json
addons_test.go:878: (dbg) Run:  out/minikube-linux-amd64 -p addons-304007 ssh "cat /opt/local-path-provisioner/pvc-d402bdb5-3384-475e-b837-b98b15392ced_default_test-pvc/file1"
addons_test.go:890: (dbg) Run:  kubectl --context addons-304007 delete pod test-local-path
addons_test.go:894: (dbg) Run:  kubectl --context addons-304007 delete pvc test-pvc
addons_test.go:898: (dbg) Run:  out/minikube-linux-amd64 -p addons-304007 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:898: (dbg) Done: out/minikube-linux-amd64 -p addons-304007 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.427336319s)
--- PASS: TestAddons/parallel/LocalPath (62.20s)
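The repeated `get pvc ... -o jsonpath={.status.phase}` calls above are a poll-until-Bound loop from the test helper. A minimal standalone sketch of that pattern (the `kubectl` invocation is stubbed with `echo` so the loop runs without a cluster; the function name is ours, not minikube's):

```shell
#!/bin/sh
# Poll a command until it reports the PVC phase "Bound", as helpers_test.go does.
wait_for_bound() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    # In the real test this is: kubectl get pvc test-pvc -o jsonpath={.status.phase}
    phase=$("$@")
    if [ "$phase" = "Bound" ]; then
      echo "pvc bound after $((i + 1)) check(s)"
      return 0
    fi
    i=$((i + 1))
    sleep 0  # the real helper waits between checks
  done
  echo "timed out waiting for pvc" >&2
  return 1
}

# Demo with a stub in place of kubectl:
wait_for_bound 5 echo Bound
```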

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:628: (dbg) Run:  kubectl --context addons-304007 create ns new-namespace
addons_test.go:642: (dbg) Run:  kubectl --context addons-304007 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestCertOptions (84.11s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-045561 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-045561 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m22.594363903s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-045561 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-045561 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-045561 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-045561" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-045561
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-045561: (1.052700929s)
--- PASS: TestCertOptions (84.11s)
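TestCertOptions asserts that the extra `--apiserver-ips`/`--apiserver-names` values end up as SANs in the certificate it inspects with `openssl x509 -text -noout`. The same inspection can be exercised against a throwaway self-signed cert (subject and paths here are illustrative; `-addext` needs OpenSSL 1.1.1+):

```shell
#!/bin/sh
# Create a cert carrying the same SANs the test passes to minikube start.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" \
  -subj "/CN=minikube-demo" \
  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.15.15,DNS:localhost,DNS:www.google.com"

# Inspect it the way the test inspects apiserver.crt and show the SAN line.
openssl x509 -text -noout -in "$dir/cert.pem" | grep -A1 "Subject Alternative Name"
```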

                                                
                                    
TestCertExpiration (478.88s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-394393 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-394393 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (3m53.525115582s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-394393 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-394393 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m4.297288746s)
helpers_test.go:175: Cleaning up "cert-expiration-394393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-394393
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-394393: (1.056131453s)
--- PASS: TestCertExpiration (478.88s)

                                                
                                    
TestForceSystemdFlag (133.68s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-819186 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-819186 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (2m12.470781036s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-819186 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-819186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-819186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-819186: (1.018532477s)
--- PASS: TestForceSystemdFlag (133.68s)

                                                
                                    
TestForceSystemdEnv (50.55s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-120922 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-120922 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (49.543595348s)
helpers_test.go:175: Cleaning up "force-systemd-env-120922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-120922
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-120922: (1.009644258s)
--- PASS: TestForceSystemdEnv (50.55s)

                                                
                                    
TestKVMDriverInstallOrUpdate (2.94s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.94s)

                                                
                                    
TestErrorSpam/setup (44.91s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-683430 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-683430 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-683430 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-683430 --driver=kvm2  --container-runtime=crio: (44.910800328s)
--- PASS: TestErrorSpam/setup (44.91s)

                                                
                                    
TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-683430 --log_dir /tmp/nospam-683430 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-683430 --log_dir /tmp/nospam-683430 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-683430 --log_dir /tmp/nospam-683430 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.72s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-683430 --log_dir /tmp/nospam-683430 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-683430 --log_dir /tmp/nospam-683430 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-683430 --log_dir /tmp/nospam-683430 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.58s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-683430 --log_dir /tmp/nospam-683430 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-683430 --log_dir /tmp/nospam-683430 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-683430 --log_dir /tmp/nospam-683430 pause
--- PASS: TestErrorSpam/pause (1.58s)

                                                
                                    
TestErrorSpam/unpause (1.67s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-683430 --log_dir /tmp/nospam-683430 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-683430 --log_dir /tmp/nospam-683430 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-683430 --log_dir /tmp/nospam-683430 unpause
--- PASS: TestErrorSpam/unpause (1.67s)

                                                
                                    
TestErrorSpam/stop (2.2s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-683430 --log_dir /tmp/nospam-683430 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-683430 --log_dir /tmp/nospam-683430 stop: (2.075077877s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-683430 --log_dir /tmp/nospam-683430 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-683430 --log_dir /tmp/nospam-683430 stop
--- PASS: TestErrorSpam/stop (2.20s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17340-332611/.minikube/files/etc/test/nested/copy/339865/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (66s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-250301 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-250301 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m6.001033723s)
--- PASS: TestFunctional/serial/StartWithProxy (66.00s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (73.05s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-250301 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-250301 --alsologtostderr -v=8: (1m13.045375406s)
functional_test.go:659: soft start took 1m13.046027243s for "functional-250301" cluster.
--- PASS: TestFunctional/serial/SoftStart (73.05s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-250301 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-250301 cache add registry.k8s.io/pause:3.3: (1.090063338s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-250301 cache add registry.k8s.io/pause:latest: (1.037850568s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-250301 /tmp/TestFunctionalserialCacheCmdcacheadd_local3940288252/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 cache add minikube-local-cache-test:functional-250301
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-250301 cache add minikube-local-cache-test:functional-250301: (1.898110809s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 cache delete minikube-local-cache-test:functional-250301
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-250301
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-250301 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (215.992201ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)
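The sequence above is: remove the image, confirm `crictl inspecti` now fails (exit status 1), run `cache reload`, then confirm `inspecti` succeeds again. A sketch of that absent-then-reload check, with the `crictl` calls stubbed (`false`/`true`) so it runs without a node:

```shell
#!/bin/sh
# Stand-ins for `crictl inspecti <image>` before and after `cache reload`.
inspect_before() { false; }   # image was rmi'd -> non-zero exit
inspect_after()  { true; }    # cache reload restored it -> zero exit

if ! inspect_before; then
  echo "image missing, running cache reload"
  # Real step: out/minikube-linux-amd64 -p <profile> cache reload
fi
inspect_after && echo "image present after reload"
```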

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 kubectl -- --context functional-250301 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-250301 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37.18s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-250301 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1002 10:49:04.535698  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 10:49:04.541486  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 10:49:04.551747  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 10:49:04.572069  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 10:49:04.612400  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 10:49:04.692662  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 10:49:04.853178  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 10:49:05.175547  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 10:49:05.816537  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-250301 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.179858136s)
functional_test.go:757: restart took 37.180000524s for "functional-250301" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.18s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-250301 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
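ComponentHealth pulls the control-plane pods as JSON and checks each reports phase Running and status Ready. A crude sketch of that check over a canned, abbreviated JSON snippet (grep-based so it runs without jq or a cluster; the snippet is illustrative, not real apiserver output):

```shell
#!/bin/sh
# Abbreviated stand-in for: kubectl get po -l tier=control-plane -n kube-system -o=json
json='{"items":[{"metadata":{"name":"etcd-x"},"status":{"phase":"Running"}},{"metadata":{"name":"kube-apiserver-x"},"status":{"phase":"Running"}}]}'

# Compare the number of pods against the number reporting phase Running.
pods=$(printf '%s' "$json" | grep -o '"name":"[^"]*"' | wc -l | tr -d ' ')
running=$(printf '%s' "$json" | grep -o '"phase":"Running"' | wc -l | tr -d ' ')
[ "$pods" -eq "$running" ] && echo "all $pods control-plane pods Running"
```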

                                                
                                    
TestFunctional/serial/LogsCmd (1.58s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 logs
E1002 10:49:07.097358  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-250301 logs: (1.584355917s)
--- PASS: TestFunctional/serial/LogsCmd (1.58s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.51s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 logs --file /tmp/TestFunctionalserialLogsFileCmd1942050265/001/logs.txt
E1002 10:49:09.657574  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-250301 logs --file /tmp/TestFunctionalserialLogsFileCmd1942050265/001/logs.txt: (1.504320609s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
TestFunctional/serial/InvalidService (4.57s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-250301 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-250301
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-250301: exit status 115 (295.505815ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.69:30998 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-250301 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.57s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.31s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-250301 config get cpus: exit status 14 (48.173797ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-250301 config get cpus: exit status 14 (50.168139ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)
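The ConfigCmd exercise is a set/get/unset round-trip in which `config get` on a missing key exits non-zero (minikube uses exit status 14 for that). A flat-file stand-in for the config store showing the same round-trip (file layout and helper names are ours, not minikube's):

```shell
#!/bin/sh
CFG=$(mktemp)

cfg_set()   { printf '%s=%s\n' "$1" "$2" >> "$CFG"; }
cfg_unset() { grep -v "^$1=" "$CFG" > "$CFG.tmp" && mv "$CFG.tmp" "$CFG" || : > "$CFG"; }
# Print the value; exit non-zero when the key is absent (minikube's status-14 path).
cfg_get()   { grep -m1 "^$1=" "$CFG" | cut -d= -f2 && grep -q "^$1=" "$CFG"; }

cfg_set cpus 2
cfg_get cpus                          # prints 2
cfg_unset cpus
cfg_get cpus || echo "key not found"  # get after unset fails
```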

                                                
                                    
TestFunctional/parallel/DashboardCmd (21.84s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-250301 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-250301 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 347274: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (21.84s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-250301 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-250301 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (138.36855ms)

-- stdout --
	* [functional-250301] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1002 10:49:30.887968  347056 out.go:296] Setting OutFile to fd 1 ...
	I1002 10:49:30.888214  347056 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:49:30.888224  347056 out.go:309] Setting ErrFile to fd 2...
	I1002 10:49:30.888231  347056 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:49:30.888413  347056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 10:49:30.888954  347056 out.go:303] Setting JSON to false
	I1002 10:49:30.890090  347056 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5517,"bootTime":1696238254,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 10:49:30.890151  347056 start.go:138] virtualization: kvm guest
	I1002 10:49:30.892514  347056 out.go:177] * [functional-250301] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 10:49:30.894581  347056 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 10:49:30.894614  347056 notify.go:220] Checking for updates...
	I1002 10:49:30.897187  347056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 10:49:30.898964  347056 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 10:49:30.900812  347056 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 10:49:30.902244  347056 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 10:49:30.903599  347056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 10:49:30.905432  347056 config.go:182] Loaded profile config "functional-250301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 10:49:30.905852  347056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:49:30.905908  347056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:49:30.927518  347056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46015
	I1002 10:49:30.927977  347056 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:49:30.928524  347056 main.go:141] libmachine: Using API Version  1
	I1002 10:49:30.928548  347056 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:49:30.928949  347056 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:49:30.929181  347056 main.go:141] libmachine: (functional-250301) Calling .DriverName
	I1002 10:49:30.929455  347056 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 10:49:30.929866  347056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:49:30.929915  347056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:49:30.944616  347056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38549
	I1002 10:49:30.945091  347056 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:49:30.945613  347056 main.go:141] libmachine: Using API Version  1
	I1002 10:49:30.945637  347056 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:49:30.945978  347056 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:49:30.946165  347056 main.go:141] libmachine: (functional-250301) Calling .DriverName
	I1002 10:49:30.978392  347056 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 10:49:30.979826  347056 start.go:298] selected driver: kvm2
	I1002 10:49:30.979845  347056 start.go:902] validating driver "kvm2" against &{Name:functional-250301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:functional-250301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.69 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:49:30.979996  347056 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 10:49:30.982467  347056 out.go:177] 
	W1002 10:49:30.983914  347056 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 10:49:30.985238  347056 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-250301 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)
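The first dry run fails by design: 250MB is below minikube's memory floor. A sketch of that validation using only what the log shows (1800MB minimum, exit status 23); `check_memory` is our name for the check, not minikube's:

```shell
# Reject a requested memory size below the usable minimum, as the
# RSRC_INSUFFICIENT_REQ_MEMORY failure above does (exit status 23 per the log).
MIN_MB=1800
check_memory() {
  req_mb=$1
  if [ "$req_mb" -lt "$MIN_MB" ]; then
    echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: requested ${req_mb}MiB is below the usable minimum of ${MIN_MB}MB" >&2
    return 23
  fi
  return 0
}
```

The second `--dry-run` invocation in the test omits `--memory`, which is why it passes the same validation.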

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-250301 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-250301 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (131.374004ms)

-- stdout --
	* [functional-250301] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1002 10:49:31.164582  347110 out.go:296] Setting OutFile to fd 1 ...
	I1002 10:49:31.164865  347110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:49:31.164875  347110 out.go:309] Setting ErrFile to fd 2...
	I1002 10:49:31.164879  347110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 10:49:31.165166  347110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 10:49:31.165672  347110 out.go:303] Setting JSON to false
	I1002 10:49:31.166702  347110 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5517,"bootTime":1696238254,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 10:49:31.166760  347110 start.go:138] virtualization: kvm guest
	I1002 10:49:31.169731  347110 out.go:177] * [functional-250301] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I1002 10:49:31.171314  347110 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 10:49:31.172729  347110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 10:49:31.171336  347110 notify.go:220] Checking for updates...
	I1002 10:49:31.175283  347110 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 10:49:31.176653  347110 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 10:49:31.178057  347110 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 10:49:31.179297  347110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 10:49:31.181095  347110 config.go:182] Loaded profile config "functional-250301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 10:49:31.181516  347110 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:49:31.181577  347110 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:49:31.196933  347110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36173
	I1002 10:49:31.197374  347110 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:49:31.197959  347110 main.go:141] libmachine: Using API Version  1
	I1002 10:49:31.197991  347110 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:49:31.198410  347110 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:49:31.198597  347110 main.go:141] libmachine: (functional-250301) Calling .DriverName
	I1002 10:49:31.198849  347110 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 10:49:31.199161  347110 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 10:49:31.199196  347110 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 10:49:31.214006  347110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I1002 10:49:31.214441  347110 main.go:141] libmachine: () Calling .GetVersion
	I1002 10:49:31.214952  347110 main.go:141] libmachine: Using API Version  1
	I1002 10:49:31.214985  347110 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 10:49:31.215370  347110 main.go:141] libmachine: () Calling .GetMachineName
	I1002 10:49:31.215587  347110 main.go:141] libmachine: (functional-250301) Calling .DriverName
	I1002 10:49:31.246960  347110 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1002 10:49:31.248203  347110 start.go:298] selected driver: kvm2
	I1002 10:49:31.248224  347110 start.go:902] validating driver "kvm2" against &{Name:functional-250301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:functional-250301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.69 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 10:49:31.248359  347110 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 10:49:31.250612  347110 out.go:177] 
	W1002 10:49:31.251915  347110 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 10:49:31.253352  347110 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (1.4s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.40s)

TestFunctional/parallel/ServiceCmdConnect (12.68s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-250301 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-250301 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-72hx5" [07a018a0-14fc-4ac0-83f7-6c6338d1e26c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-72hx5" [07a018a0-14fc-4ac0-83f7-6c6338d1e26c] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.018967188s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.69:31367
functional_test.go:1674: http://192.168.39.69:31367: success! body:

Hostname: hello-node-connect-55497b8b78-72hx5

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.69:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.69:31367
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.68s)
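The `waiting 10m0s for pods matching ...` lines above are a poll-until-healthy loop: retry, sleep, give up after a budget. A generic shell sketch of that pattern (`wait_until` is our helper name, not a minikube function):

```shell
# Retry a command until it succeeds or the attempt budget is spent,
# sleeping between tries -- the shape of the pod-readiness waits above.
wait_until() {
  attempts=$1; delay=$2; shift 2
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$attempts" ]; then return 1; fi
    sleep "$delay"
  done
}
```

In the real test the retried command is a Kubernetes pod-status check and the budget is the 10-minute timeout; here any command works.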

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (60.81s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9c5607dc-d90c-4f1e-8492-ed011f60666b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.012350536s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-250301 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-250301 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-250301 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-250301 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-250301 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [92fe7f11-0e5b-44f5-ad05-2d18956fefba] Pending
helpers_test.go:344: "sp-pod" [92fe7f11-0e5b-44f5-ad05-2d18956fefba] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [92fe7f11-0e5b-44f5-ad05-2d18956fefba] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.051916733s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-250301 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-250301 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-250301 delete -f testdata/storage-provisioner/pod.yaml: (1.216164759s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-250301 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [010fdc38-8e85-41a0-8332-10b85d44da56] Pending
helpers_test.go:344: "sp-pod" [010fdc38-8e85-41a0-8332-10b85d44da56] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [010fdc38-8e85-41a0-8332-10b85d44da56] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 34.029279083s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-250301 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (60.81s)
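What the pass actually establishes: `/tmp/mount/foo`, created through the first `sp-pod`, is still visible after that pod is deleted and a second pod mounts the same claim. A toy shell illustration of the property, with a temp directory standing in for the claim-backed volume:

```shell
# The volume outlives its consumers: write via "pod 1", verify via "pod 2".
VOL=$(mktemp -d)               # stand-in for the PVC-backed volume
touch "$VOL/foo"               # cf. kubectl exec sp-pod -- touch /tmp/mount/foo
# ... first pod deleted, a new pod mounts the same volume ...
ls "$VOL" | grep -q '^foo$'    # cf. kubectl exec sp-pod -- ls /tmp/mount
```

The second pod's longer wait in the log (34s vs 18s) is just scheduling/pull variance; the persistence check itself is the final `ls`.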

TestFunctional/parallel/SSHCmd (0.48s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.48s)

TestFunctional/parallel/CpCmd (0.89s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh -n functional-250301 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 cp functional-250301:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1788057788/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh -n functional-250301 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.89s)

TestFunctional/parallel/MySQL (34.79s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-250301 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-ls9vp" [8f7037c5-1b06-49a4-ae4b-e5a2c65125b0] Pending
helpers_test.go:344: "mysql-859648c796-ls9vp" [8f7037c5-1b06-49a4-ae4b-e5a2c65125b0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-ls9vp" [8f7037c5-1b06-49a4-ae4b-e5a2c65125b0] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 33.036736586s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-250301 exec mysql-859648c796-ls9vp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-250301 exec mysql-859648c796-ls9vp -- mysql -ppassword -e "show databases;": exit status 1 (194.365042ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-250301 exec mysql-859648c796-ls9vp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (34.79s)

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/339865/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "sudo cat /etc/test/nested/copy/339865/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.71s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/339865.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "sudo cat /etc/ssl/certs/339865.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/339865.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "sudo cat /usr/share/ca-certificates/339865.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3398652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "sudo cat /etc/ssl/certs/3398652.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/3398652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "sudo cat /usr/share/ca-certificates/3398652.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.71s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-250301 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-250301 ssh "sudo systemctl is-active docker": exit status 1 (213.027548ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-250301 ssh "sudo systemctl is-active containerd": exit status 1 (210.307146ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

TestFunctional/parallel/License (0.61s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.61s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-250301 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-250301 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-bmbbw" [6509a553-f836-456f-ac47-8d10b8461e6b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-bmbbw" [6509a553-f836-456f-ac47-8d10b8461e6b] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.042386676s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.27s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.92s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 version -o=json --components
E1002 10:49:45.499542  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/Version/components (0.92s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-250301 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-250301
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-250301
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-250301 image ls --format short --alsologtostderr:
I1002 10:49:45.857338  348052 out.go:296] Setting OutFile to fd 1 ...
I1002 10:49:45.857467  348052 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:49:45.857477  348052 out.go:309] Setting ErrFile to fd 2...
I1002 10:49:45.857484  348052 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:49:45.857718  348052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
I1002 10:49:45.859123  348052 config.go:182] Loaded profile config "functional-250301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 10:49:45.859517  348052 config.go:182] Loaded profile config "functional-250301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 10:49:45.859952  348052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 10:49:45.859996  348052 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 10:49:45.875335  348052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44197
I1002 10:49:45.875845  348052 main.go:141] libmachine: () Calling .GetVersion
I1002 10:49:45.876468  348052 main.go:141] libmachine: Using API Version  1
I1002 10:49:45.876498  348052 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 10:49:45.876951  348052 main.go:141] libmachine: () Calling .GetMachineName
I1002 10:49:45.877182  348052 main.go:141] libmachine: (functional-250301) Calling .GetState
I1002 10:49:45.879395  348052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 10:49:45.879457  348052 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 10:49:45.894690  348052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44991
I1002 10:49:45.895142  348052 main.go:141] libmachine: () Calling .GetVersion
I1002 10:49:45.895660  348052 main.go:141] libmachine: Using API Version  1
I1002 10:49:45.895685  348052 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 10:49:45.896096  348052 main.go:141] libmachine: () Calling .GetMachineName
I1002 10:49:45.896344  348052 main.go:141] libmachine: (functional-250301) Calling .DriverName
I1002 10:49:45.896580  348052 ssh_runner.go:195] Run: systemctl --version
I1002 10:49:45.896614  348052 main.go:141] libmachine: (functional-250301) Calling .GetSSHHostname
I1002 10:49:45.899402  348052 main.go:141] libmachine: (functional-250301) DBG | domain functional-250301 has defined MAC address 52:54:00:4e:42:21 in network mk-functional-250301
I1002 10:49:45.899839  348052 main.go:141] libmachine: (functional-250301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:42:21", ip: ""} in network mk-functional-250301: {Iface:virbr1 ExpiryTime:2023-10-02 11:46:19 +0000 UTC Type:0 Mac:52:54:00:4e:42:21 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:functional-250301 Clientid:01:52:54:00:4e:42:21}
I1002 10:49:45.899897  348052 main.go:141] libmachine: (functional-250301) DBG | domain functional-250301 has defined IP address 192.168.39.69 and MAC address 52:54:00:4e:42:21 in network mk-functional-250301
I1002 10:49:45.900060  348052 main.go:141] libmachine: (functional-250301) Calling .GetSSHPort
I1002 10:49:45.900279  348052 main.go:141] libmachine: (functional-250301) Calling .GetSSHKeyPath
I1002 10:49:45.900465  348052 main.go:141] libmachine: (functional-250301) Calling .GetSSHUsername
I1002 10:49:45.900634  348052 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/functional-250301/id_rsa Username:docker}
I1002 10:49:46.029842  348052 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 10:49:46.101725  348052 main.go:141] libmachine: Making call to close driver server
I1002 10:49:46.101748  348052 main.go:141] libmachine: (functional-250301) Calling .Close
I1002 10:49:46.102020  348052 main.go:141] libmachine: Successfully made call to close driver server
I1002 10:49:46.102036  348052 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 10:49:46.102050  348052 main.go:141] libmachine: Making call to close driver server
I1002 10:49:46.102059  348052 main.go:141] libmachine: (functional-250301) Calling .Close
I1002 10:49:46.102324  348052 main.go:141] libmachine: Successfully made call to close driver server
I1002 10:49:46.102343  348052 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-250301 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.28.2            | 7a5d9d67a13f6 | 61.5MB |
| registry.k8s.io/kube-apiserver          | v1.28.2            | cdcab12b2dd16 | 127MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/google-containers/addon-resizer  | functional-250301  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/minikube-local-cache-test     | functional-250301  | ea25b55da961d | 3.35kB |
| registry.k8s.io/kube-proxy              | v1.28.2            | c120fed2beb84 | 74.7MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| registry.k8s.io/kube-controller-manager | v1.28.2            | 55f13c92defb1 | 123MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-250301  | 4e5b35c57c52a | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| docker.io/library/nginx                 | latest             | 61395b4c586da | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-250301 image ls --format table --alsologtostderr:
I1002 10:49:51.556328  348213 out.go:296] Setting OutFile to fd 1 ...
I1002 10:49:51.556590  348213 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:49:51.556601  348213 out.go:309] Setting ErrFile to fd 2...
I1002 10:49:51.556606  348213 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:49:51.556817  348213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
I1002 10:49:51.557414  348213 config.go:182] Loaded profile config "functional-250301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 10:49:51.557533  348213 config.go:182] Loaded profile config "functional-250301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 10:49:51.557925  348213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 10:49:51.558001  348213 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 10:49:51.573551  348213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
I1002 10:49:51.574096  348213 main.go:141] libmachine: () Calling .GetVersion
I1002 10:49:51.574773  348213 main.go:141] libmachine: Using API Version  1
I1002 10:49:51.574794  348213 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 10:49:51.575139  348213 main.go:141] libmachine: () Calling .GetMachineName
I1002 10:49:51.575419  348213 main.go:141] libmachine: (functional-250301) Calling .GetState
I1002 10:49:51.577183  348213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 10:49:51.577227  348213 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 10:49:51.591460  348213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46569
I1002 10:49:51.591877  348213 main.go:141] libmachine: () Calling .GetVersion
I1002 10:49:51.592339  348213 main.go:141] libmachine: Using API Version  1
I1002 10:49:51.592361  348213 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 10:49:51.592692  348213 main.go:141] libmachine: () Calling .GetMachineName
I1002 10:49:51.592879  348213 main.go:141] libmachine: (functional-250301) Calling .DriverName
I1002 10:49:51.593079  348213 ssh_runner.go:195] Run: systemctl --version
I1002 10:49:51.593104  348213 main.go:141] libmachine: (functional-250301) Calling .GetSSHHostname
I1002 10:49:51.595817  348213 main.go:141] libmachine: (functional-250301) DBG | domain functional-250301 has defined MAC address 52:54:00:4e:42:21 in network mk-functional-250301
I1002 10:49:51.596217  348213 main.go:141] libmachine: (functional-250301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:42:21", ip: ""} in network mk-functional-250301: {Iface:virbr1 ExpiryTime:2023-10-02 11:46:19 +0000 UTC Type:0 Mac:52:54:00:4e:42:21 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:functional-250301 Clientid:01:52:54:00:4e:42:21}
I1002 10:49:51.596251  348213 main.go:141] libmachine: (functional-250301) DBG | domain functional-250301 has defined IP address 192.168.39.69 and MAC address 52:54:00:4e:42:21 in network mk-functional-250301
I1002 10:49:51.596352  348213 main.go:141] libmachine: (functional-250301) Calling .GetSSHPort
I1002 10:49:51.596501  348213 main.go:141] libmachine: (functional-250301) Calling .GetSSHKeyPath
I1002 10:49:51.596613  348213 main.go:141] libmachine: (functional-250301) Calling .GetSSHUsername
I1002 10:49:51.596754  348213 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/functional-250301/id_rsa Username:docker}
I1002 10:49:51.684811  348213 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 10:49:51.729985  348213 main.go:141] libmachine: Making call to close driver server
I1002 10:49:51.730002  348213 main.go:141] libmachine: (functional-250301) Calling .Close
I1002 10:49:51.730305  348213 main.go:141] libmachine: Successfully made call to close driver server
I1002 10:49:51.730323  348213 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 10:49:51.730333  348213 main.go:141] libmachine: Making call to close driver server
I1002 10:49:51.730344  348213 main.go:141] libmachine: (functional-250301) Calling .Close
I1002 10:49:51.730633  348213 main.go:141] libmachine: Successfully made call to close driver server
I1002 10:49:51.730650  348213 main.go:141] libmachine: Making call to close connection to plugin binary
2023/10/02 10:49:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-250301 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":["docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755","docker.io/library/nginx@sha256:b2888fc9cfe7cd9d6727aeb462d13c7c45dec413b66f2819a36c4a3cb9d4df75"],"repoTags":["docker.io/library/nginx:latest"],"size":"190820094"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-250301"],"size":"34114467"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"4e5b35c57c52a2098b305c49731c35fdb65ba5d87eca27f241cbc8cd1f7d0c9f","repoDigests":["localhost/my-image@sha256:f82765717e356f2d3d81bc61afa7dc7b6dc3423539e05758a29584bd350f1461"],"repoTags":["localhost/my-image:functional-250301"],"size":"1468600"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4","registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"123171638"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab","registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"61485878"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"9f83e50ca58b837bd18c7e306f878e0120192761d19af5453c9c0d38322354c1","repoDigests":["docker.io/library/b4a6387376d31f0643c7ff3c25851a6f2c3247f69589a21b2257758eb79cae6b-tmp@sha256:84180875351e532914ec23603235dfdb6a0d3cf4d0752dfabf1d37079bc3419e"],"repoTags":[],"size":"1466018"},{"id":"ea25b55da961d30c4b448013b9ea355bed009bfc90a84e5834cb11679c1d1c8b","repoDigests":["localhost/minikube-local-cache-test@sha256:2094208d5f4dd246f1929723df16cf39daacbf980c35e68a8426fb9478b3d428"],"repoTags":["localhost/minikube-local-cache-test:functional-250301"],"size":"3345"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631","registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"127149008"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":["registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded","registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"74687895"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-250301 image ls --format json --alsologtostderr:
I1002 10:49:51.341722  348189 out.go:296] Setting OutFile to fd 1 ...
I1002 10:49:51.341828  348189 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:49:51.341835  348189 out.go:309] Setting ErrFile to fd 2...
I1002 10:49:51.341840  348189 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:49:51.341996  348189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
I1002 10:49:51.342548  348189 config.go:182] Loaded profile config "functional-250301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 10:49:51.342655  348189 config.go:182] Loaded profile config "functional-250301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 10:49:51.343018  348189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 10:49:51.343069  348189 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 10:49:51.357686  348189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34813
I1002 10:49:51.358158  348189 main.go:141] libmachine: () Calling .GetVersion
I1002 10:49:51.359106  348189 main.go:141] libmachine: Using API Version  1
I1002 10:49:51.359143  348189 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 10:49:51.360043  348189 main.go:141] libmachine: () Calling .GetMachineName
I1002 10:49:51.360271  348189 main.go:141] libmachine: (functional-250301) Calling .GetState
I1002 10:49:51.361931  348189 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 10:49:51.361970  348189 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 10:49:51.376221  348189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
I1002 10:49:51.376593  348189 main.go:141] libmachine: () Calling .GetVersion
I1002 10:49:51.377025  348189 main.go:141] libmachine: Using API Version  1
I1002 10:49:51.377048  348189 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 10:49:51.377354  348189 main.go:141] libmachine: () Calling .GetMachineName
I1002 10:49:51.377524  348189 main.go:141] libmachine: (functional-250301) Calling .DriverName
I1002 10:49:51.377713  348189 ssh_runner.go:195] Run: systemctl --version
I1002 10:49:51.377737  348189 main.go:141] libmachine: (functional-250301) Calling .GetSSHHostname
I1002 10:49:51.380126  348189 main.go:141] libmachine: (functional-250301) DBG | domain functional-250301 has defined MAC address 52:54:00:4e:42:21 in network mk-functional-250301
I1002 10:49:51.380513  348189 main.go:141] libmachine: (functional-250301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:42:21", ip: ""} in network mk-functional-250301: {Iface:virbr1 ExpiryTime:2023-10-02 11:46:19 +0000 UTC Type:0 Mac:52:54:00:4e:42:21 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:functional-250301 Clientid:01:52:54:00:4e:42:21}
I1002 10:49:51.380538  348189 main.go:141] libmachine: (functional-250301) DBG | domain functional-250301 has defined IP address 192.168.39.69 and MAC address 52:54:00:4e:42:21 in network mk-functional-250301
I1002 10:49:51.380633  348189 main.go:141] libmachine: (functional-250301) Calling .GetSSHPort
I1002 10:49:51.380784  348189 main.go:141] libmachine: (functional-250301) Calling .GetSSHKeyPath
I1002 10:49:51.380962  348189 main.go:141] libmachine: (functional-250301) Calling .GetSSHUsername
I1002 10:49:51.381100  348189 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/functional-250301/id_rsa Username:docker}
I1002 10:49:51.469026  348189 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 10:49:51.513128  348189 main.go:141] libmachine: Making call to close driver server
I1002 10:49:51.513142  348189 main.go:141] libmachine: (functional-250301) Calling .Close
I1002 10:49:51.513430  348189 main.go:141] libmachine: Successfully made call to close driver server
I1002 10:49:51.513452  348189 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 10:49:51.513448  348189 main.go:141] libmachine: (functional-250301) DBG | Closing plugin on server side
I1002 10:49:51.513468  348189 main.go:141] libmachine: Making call to close driver server
I1002 10:49:51.513478  348189 main.go:141] libmachine: (functional-250301) Calling .Close
I1002 10:49:51.513724  348189 main.go:141] libmachine: Successfully made call to close driver server
I1002 10:49:51.513755  348189 main.go:141] libmachine: (functional-250301) DBG | Closing plugin on server side
I1002 10:49:51.513771  348189 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-250301 image ls --format yaml --alsologtostderr:
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-250301
size: "34114467"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded
- registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "74687895"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631
- registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "127149008"
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab
- registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "61485878"
- id: 61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests:
- docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755
- docker.io/library/nginx@sha256:b2888fc9cfe7cd9d6727aeb462d13c7c45dec413b66f2819a36c4a3cb9d4df75
repoTags:
- docker.io/library/nginx:latest
size: "190820094"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4
- registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "123171638"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ea25b55da961d30c4b448013b9ea355bed009bfc90a84e5834cb11679c1d1c8b
repoDigests:
- localhost/minikube-local-cache-test@sha256:2094208d5f4dd246f1929723df16cf39daacbf980c35e68a8426fb9478b3d428
repoTags:
- localhost/minikube-local-cache-test:functional-250301
size: "3345"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-250301 image ls --format yaml --alsologtostderr:
I1002 10:49:46.149085  348075 out.go:296] Setting OutFile to fd 1 ...
I1002 10:49:46.149633  348075 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:49:46.149651  348075 out.go:309] Setting ErrFile to fd 2...
I1002 10:49:46.149657  348075 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:49:46.149922  348075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
I1002 10:49:46.150626  348075 config.go:182] Loaded profile config "functional-250301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 10:49:46.150744  348075 config.go:182] Loaded profile config "functional-250301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 10:49:46.151164  348075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 10:49:46.151221  348075 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 10:49:46.165781  348075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
I1002 10:49:46.166233  348075 main.go:141] libmachine: () Calling .GetVersion
I1002 10:49:46.166789  348075 main.go:141] libmachine: Using API Version  1
I1002 10:49:46.166813  348075 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 10:49:46.167203  348075 main.go:141] libmachine: () Calling .GetMachineName
I1002 10:49:46.167413  348075 main.go:141] libmachine: (functional-250301) Calling .GetState
I1002 10:49:46.169331  348075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 10:49:46.169370  348075 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 10:49:46.183916  348075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41427
I1002 10:49:46.184414  348075 main.go:141] libmachine: () Calling .GetVersion
I1002 10:49:46.185011  348075 main.go:141] libmachine: Using API Version  1
I1002 10:49:46.185042  348075 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 10:49:46.185387  348075 main.go:141] libmachine: () Calling .GetMachineName
I1002 10:49:46.185574  348075 main.go:141] libmachine: (functional-250301) Calling .DriverName
I1002 10:49:46.185798  348075 ssh_runner.go:195] Run: systemctl --version
I1002 10:49:46.185824  348075 main.go:141] libmachine: (functional-250301) Calling .GetSSHHostname
I1002 10:49:46.188969  348075 main.go:141] libmachine: (functional-250301) DBG | domain functional-250301 has defined MAC address 52:54:00:4e:42:21 in network mk-functional-250301
I1002 10:49:46.189416  348075 main.go:141] libmachine: (functional-250301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:42:21", ip: ""} in network mk-functional-250301: {Iface:virbr1 ExpiryTime:2023-10-02 11:46:19 +0000 UTC Type:0 Mac:52:54:00:4e:42:21 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:functional-250301 Clientid:01:52:54:00:4e:42:21}
I1002 10:49:46.189441  348075 main.go:141] libmachine: (functional-250301) DBG | domain functional-250301 has defined IP address 192.168.39.69 and MAC address 52:54:00:4e:42:21 in network mk-functional-250301
I1002 10:49:46.189603  348075 main.go:141] libmachine: (functional-250301) Calling .GetSSHPort
I1002 10:49:46.189815  348075 main.go:141] libmachine: (functional-250301) Calling .GetSSHKeyPath
I1002 10:49:46.189984  348075 main.go:141] libmachine: (functional-250301) Calling .GetSSHUsername
I1002 10:49:46.190123  348075 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/functional-250301/id_rsa Username:docker}
I1002 10:49:46.284733  348075 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 10:49:46.339518  348075 main.go:141] libmachine: Making call to close driver server
I1002 10:49:46.339537  348075 main.go:141] libmachine: (functional-250301) Calling .Close
I1002 10:49:46.339910  348075 main.go:141] libmachine: Successfully made call to close driver server
I1002 10:49:46.339956  348075 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 10:49:46.339961  348075 main.go:141] libmachine: (functional-250301) DBG | Closing plugin on server side
I1002 10:49:46.339975  348075 main.go:141] libmachine: Making call to close driver server
I1002 10:49:46.339987  348075 main.go:141] libmachine: (functional-250301) Calling .Close
I1002 10:49:46.340288  348075 main.go:141] libmachine: Successfully made call to close driver server
I1002 10:49:46.340308  348075 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 10:49:46.340328  348075 main.go:141] libmachine: (functional-250301) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-250301 ssh pgrep buildkitd: exit status 1 (193.918939ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image build -t localhost/my-image:functional-250301 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-250301 image build -t localhost/my-image:functional-250301 testdata/build --alsologtostderr: (4.546754176s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-250301 image build -t localhost/my-image:functional-250301 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9f83e50ca58
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-250301
--> 4e5b35c57c5
Successfully tagged localhost/my-image:functional-250301
4e5b35c57c52a2098b305c49731c35fdb65ba5d87eca27f241cbc8cd1f7d0c9f
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-250301 image build -t localhost/my-image:functional-250301 testdata/build --alsologtostderr:
I1002 10:49:46.579123  348130 out.go:296] Setting OutFile to fd 1 ...
I1002 10:49:46.579405  348130 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:49:46.579415  348130 out.go:309] Setting ErrFile to fd 2...
I1002 10:49:46.579421  348130 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 10:49:46.579591  348130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
I1002 10:49:46.580125  348130 config.go:182] Loaded profile config "functional-250301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 10:49:46.580602  348130 config.go:182] Loaded profile config "functional-250301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 10:49:46.580978  348130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 10:49:46.581024  348130 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 10:49:46.596332  348130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
I1002 10:49:46.596778  348130 main.go:141] libmachine: () Calling .GetVersion
I1002 10:49:46.597288  348130 main.go:141] libmachine: Using API Version  1
I1002 10:49:46.597309  348130 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 10:49:46.597644  348130 main.go:141] libmachine: () Calling .GetMachineName
I1002 10:49:46.597843  348130 main.go:141] libmachine: (functional-250301) Calling .GetState
I1002 10:49:46.599689  348130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 10:49:46.599735  348130 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 10:49:46.613718  348130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43803
I1002 10:49:46.614125  348130 main.go:141] libmachine: () Calling .GetVersion
I1002 10:49:46.614579  348130 main.go:141] libmachine: Using API Version  1
I1002 10:49:46.614598  348130 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 10:49:46.614930  348130 main.go:141] libmachine: () Calling .GetMachineName
I1002 10:49:46.615143  348130 main.go:141] libmachine: (functional-250301) Calling .DriverName
I1002 10:49:46.615328  348130 ssh_runner.go:195] Run: systemctl --version
I1002 10:49:46.615358  348130 main.go:141] libmachine: (functional-250301) Calling .GetSSHHostname
I1002 10:49:46.617862  348130 main.go:141] libmachine: (functional-250301) DBG | domain functional-250301 has defined MAC address 52:54:00:4e:42:21 in network mk-functional-250301
I1002 10:49:46.618226  348130 main.go:141] libmachine: (functional-250301) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:42:21", ip: ""} in network mk-functional-250301: {Iface:virbr1 ExpiryTime:2023-10-02 11:46:19 +0000 UTC Type:0 Mac:52:54:00:4e:42:21 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:functional-250301 Clientid:01:52:54:00:4e:42:21}
I1002 10:49:46.618267  348130 main.go:141] libmachine: (functional-250301) DBG | domain functional-250301 has defined IP address 192.168.39.69 and MAC address 52:54:00:4e:42:21 in network mk-functional-250301
I1002 10:49:46.618374  348130 main.go:141] libmachine: (functional-250301) Calling .GetSSHPort
I1002 10:49:46.618552  348130 main.go:141] libmachine: (functional-250301) Calling .GetSSHKeyPath
I1002 10:49:46.618724  348130 main.go:141] libmachine: (functional-250301) Calling .GetSSHUsername
I1002 10:49:46.618878  348130 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/functional-250301/id_rsa Username:docker}
I1002 10:49:46.704948  348130 build_images.go:151] Building image from path: /tmp/build.1159047250.tar
I1002 10:49:46.705067  348130 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 10:49:46.718233  348130 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1159047250.tar
I1002 10:49:46.722779  348130 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1159047250.tar: stat -c "%s %y" /var/lib/minikube/build/build.1159047250.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1159047250.tar': No such file or directory
I1002 10:49:46.722805  348130 ssh_runner.go:362] scp /tmp/build.1159047250.tar --> /var/lib/minikube/build/build.1159047250.tar (3072 bytes)
I1002 10:49:46.746470  348130 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1159047250
I1002 10:49:46.757222  348130 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1159047250 -xf /var/lib/minikube/build/build.1159047250.tar
I1002 10:49:46.765880  348130 crio.go:297] Building image: /var/lib/minikube/build/build.1159047250
I1002 10:49:46.765945  348130 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-250301 /var/lib/minikube/build/build.1159047250 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1002 10:49:51.040841  348130 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-250301 /var/lib/minikube/build/build.1159047250 --cgroup-manager=cgroupfs: (4.274865962s)
I1002 10:49:51.040924  348130 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1159047250
I1002 10:49:51.062478  348130 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1159047250.tar
I1002 10:49:51.080087  348130 build_images.go:207] Built localhost/my-image:functional-250301 from /tmp/build.1159047250.tar
I1002 10:49:51.080134  348130 build_images.go:123] succeeded building to: functional-250301
I1002 10:49:51.080140  348130 build_images.go:124] failed building to: 
I1002 10:49:51.080192  348130 main.go:141] libmachine: Making call to close driver server
I1002 10:49:51.080228  348130 main.go:141] libmachine: (functional-250301) Calling .Close
I1002 10:49:51.080571  348130 main.go:141] libmachine: Successfully made call to close driver server
I1002 10:49:51.080596  348130 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 10:49:51.080608  348130 main.go:141] libmachine: Making call to close driver server
I1002 10:49:51.080619  348130 main.go:141] libmachine: (functional-250301) Calling .Close
I1002 10:49:51.080910  348130 main.go:141] libmachine: Successfully made call to close driver server
I1002 10:49:51.080935  348130 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 10:49:51.080989  348130 main.go:141] libmachine: (functional-250301) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.96s)
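As an aside for reproducing this case locally: the three STEP lines in the build stdout above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) imply a build context consisting of a Dockerfile and a content.txt file. The sketch below recreates such a context; it is an assumption reconstructed from the log, since the actual contents of testdata/build and of content.txt are not shown here:

```shell
# Sketch (assumption): rebuild a context equivalent to testdata/build,
# based only on the three STEP lines recorded in the stdout above.
set -eu
ctx="$(mktemp -d)"
# Hypothetical file contents -- the real content.txt is not shown in this log.
echo "hello from the build context" > "$ctx/content.txt"
cat > "$ctx/Dockerfile" <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# The test then builds it inside the VM with a command of the form:
#   out/minikube-linux-amd64 -p functional-250301 image build \
#     -t localhost/my-image:functional-250301 "$ctx" --alsologtostderr
ls "$ctx"
```

With the crio runtime, minikube transfers this context as a tar to /var/lib/minikube/build and runs `sudo podman build` inside the VM, as the ssh_runner lines in the stderr above show.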

TestFunctional/parallel/ImageCommands/Setup (1.99s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.967288571s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-250301
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.99s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image load --daemon gcr.io/google-containers/addon-resizer:functional-250301 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-250301 image load --daemon gcr.io/google-containers/addon-resizer:functional-250301 --alsologtostderr: (3.736873919s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.96s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image load --daemon gcr.io/google-containers/addon-resizer:functional-250301 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-250301 image load --daemon gcr.io/google-containers/addon-resizer:functional-250301 --alsologtostderr: (2.294479841s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.51s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E1002 10:49:25.019288  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.167060832s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-250301
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image load --daemon gcr.io/google-containers/addon-resizer:functional-250301 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-250301 image load --daemon gcr.io/google-containers/addon-resizer:functional-250301 --alsologtostderr: (7.251685088s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.72s)

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 service list -o json
functional_test.go:1493: Took "395.248858ms" to run "out/minikube-linux-amd64 -p functional-250301 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.40s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.69:30119
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

TestFunctional/parallel/ServiceCmd/URL (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.69:30119
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "295.791128ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "43.964581ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "316.731826ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "47.306035ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

TestFunctional/parallel/MountCmd/any-port (12.34s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-250301 /tmp/TestFunctionalparallelMountCmdany-port438990441/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696243768938782495" to /tmp/TestFunctionalparallelMountCmdany-port438990441/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696243768938782495" to /tmp/TestFunctionalparallelMountCmdany-port438990441/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696243768938782495" to /tmp/TestFunctionalparallelMountCmdany-port438990441/001/test-1696243768938782495
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-250301 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (324.860085ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 10:49 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 10:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 10:49 test-1696243768938782495
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh cat /mount-9p/test-1696243768938782495
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-250301 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [049ff0c4-d955-44b4-8f57-8db7ac03cb3e] Pending
helpers_test.go:344: "busybox-mount" [049ff0c4-d955-44b4-8f57-8db7ac03cb3e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [049ff0c4-d955-44b4-8f57-8db7ac03cb3e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [049ff0c4-d955-44b4-8f57-8db7ac03cb3e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.042239429s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-250301 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-250301 /tmp/TestFunctionalparallelMountCmdany-port438990441/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.34s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image save gcr.io/google-containers/addon-resizer:functional-250301 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-250301 image save gcr.io/google-containers/addon-resizer:functional-250301 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.584791137s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.58s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image rm gcr.io/google-containers/addon-resizer:functional-250301 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-250301 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.124441164s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.34s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-250301
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 image save --daemon gcr.io/google-containers/addon-resizer:functional-250301 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-250301 image save --daemon gcr.io/google-containers/addon-resizer:functional-250301 --alsologtostderr: (1.337455823s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-250301
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.37s)

TestFunctional/parallel/MountCmd/specific-port (1.73s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-250301 /tmp/TestFunctionalparallelMountCmdspecific-port3359303260/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-250301 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (260.612822ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-250301 /tmp/TestFunctionalparallelMountCmdspecific-port3359303260/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-250301 ssh "sudo umount -f /mount-9p": exit status 1 (245.081801ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-250301 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-250301 /tmp/TestFunctionalparallelMountCmdspecific-port3359303260/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.73s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-250301 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1931714109/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-250301 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1931714109/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-250301 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1931714109/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-250301 ssh "findmnt -T" /mount1: exit status 1 (261.481471ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-250301 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-250301 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-250301 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1931714109/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-250301 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1931714109/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-250301 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1931714109/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-250301
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-250301
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-250301
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (80.28s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-982656 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1002 10:50:26.459727  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-982656 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m20.283236192s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (80.28s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.47s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-982656 addons enable ingress --alsologtostderr -v=5
E1002 10:51:48.380148  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-982656 addons enable ingress --alsologtostderr -v=5: (17.470078714s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.47s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-982656 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

TestJSONOutput/start/Command (109.96s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-580058 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1002 10:54:55.622725  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 10:55:36.583744  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-580058 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m49.958536922s)
--- PASS: TestJSONOutput/start/Command (109.96s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.7s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-580058 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-580058 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.09s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-580058 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-580058 --output=json --user=testUser: (7.092239828s)
--- PASS: TestJSONOutput/stop/Command (7.09s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-895970 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-895970 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.630275ms)
-- stdout --
	{"specversion":"1.0","id":"473d8e94-3120-4097-bfbe-8a5c570c303a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-895970] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"26711526-5548-43e0-b978-b837752a9f3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17340"}}
	{"specversion":"1.0","id":"b3768cb1-8689-4d7f-a390-79a8fef05108","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3dc1eb5a-6104-453f-9cc4-54ff2b9c4733","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig"}}
	{"specversion":"1.0","id":"49f767c5-8228-4515-81ca-3ea5c8d40255","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube"}}
	{"specversion":"1.0","id":"ce0929ad-4ec8-4be4-8dc9-fbd3d12bb92c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a4e7a778-49dc-4bc8-beaf-42a336fbb79f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3325ba70-96c5-4c80-90ca-13c676ce9db8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-895970" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-895970
--- PASS: TestErrorJSONOutput (0.19s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (100.98s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-066752 --driver=kvm2  --container-runtime=crio
E1002 10:56:55.305699  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 10:56:55.311031  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 10:56:55.321306  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 10:56:55.341613  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 10:56:55.381931  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 10:56:55.462282  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 10:56:55.622752  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 10:56:55.943352  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 10:56:56.584256  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 10:56:57.865367  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 10:56:58.504115  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 10:57:00.426418  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 10:57:05.547257  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 10:57:15.787458  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-066752 --driver=kvm2  --container-runtime=crio: (46.230918597s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-069917 --driver=kvm2  --container-runtime=crio
E1002 10:57:36.267763  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 10:58:17.228868  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-069917 --driver=kvm2  --container-runtime=crio: (52.133389786s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-066752
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-069917
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-069917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-069917
helpers_test.go:175: Cleaning up "first-066752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-066752
--- PASS: TestMinikubeProfile (100.98s)

TestMountStart/serial/StartWithMountFirst (28.26s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-442328 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-442328 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.255616229s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.26s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-442328 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-442328 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (29.89s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-461003 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1002 10:59:04.538625  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 10:59:14.659909  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-461003 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.889008637s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.89s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-461003 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-461003 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.88s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-442328 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.88s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-461003 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-461003 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.11s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-461003
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-461003: (1.107866635s)
--- PASS: TestMountStart/serial/Stop (1.11s)

TestMountStart/serial/RestartStopped (22.27s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-461003
E1002 10:59:39.152072  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 10:59:42.345797  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-461003: (21.273951325s)
--- PASS: TestMountStart/serial/RestartStopped (22.27s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-461003 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-461003 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (112.4s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-224116 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-224116 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m51.970052947s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.40s)

TestMultiNode/serial/DeployApp2Nodes (5.83s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224116 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224116 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-224116 -- rollout status deployment/busybox: (4.178579814s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224116 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224116 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224116 -- exec busybox-5bc68d56bd-h45vs -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224116 -- exec busybox-5bc68d56bd-jjswt -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224116 -- exec busybox-5bc68d56bd-h45vs -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224116 -- exec busybox-5bc68d56bd-jjswt -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224116 -- exec busybox-5bc68d56bd-h45vs -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224116 -- exec busybox-5bc68d56bd-jjswt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.83s)

TestMultiNode/serial/AddNode (45.31s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-224116 -v 3 --alsologtostderr
E1002 11:02:22.993254  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-224116 -v 3 --alsologtostderr: (44.725153299s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.31s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (7.46s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 cp testdata/cp-test.txt multinode-224116:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 cp multinode-224116:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile638977480/001/cp-test_multinode-224116.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 cp multinode-224116:/home/docker/cp-test.txt multinode-224116-m02:/home/docker/cp-test_multinode-224116_multinode-224116-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116-m02 "sudo cat /home/docker/cp-test_multinode-224116_multinode-224116-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 cp multinode-224116:/home/docker/cp-test.txt multinode-224116-m03:/home/docker/cp-test_multinode-224116_multinode-224116-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116-m03 "sudo cat /home/docker/cp-test_multinode-224116_multinode-224116-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 cp testdata/cp-test.txt multinode-224116-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 cp multinode-224116-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile638977480/001/cp-test_multinode-224116-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 cp multinode-224116-m02:/home/docker/cp-test.txt multinode-224116:/home/docker/cp-test_multinode-224116-m02_multinode-224116.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116 "sudo cat /home/docker/cp-test_multinode-224116-m02_multinode-224116.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 cp multinode-224116-m02:/home/docker/cp-test.txt multinode-224116-m03:/home/docker/cp-test_multinode-224116-m02_multinode-224116-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116-m03 "sudo cat /home/docker/cp-test_multinode-224116-m02_multinode-224116-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 cp testdata/cp-test.txt multinode-224116-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 cp multinode-224116-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile638977480/001/cp-test_multinode-224116-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 cp multinode-224116-m03:/home/docker/cp-test.txt multinode-224116:/home/docker/cp-test_multinode-224116-m03_multinode-224116.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116 "sudo cat /home/docker/cp-test_multinode-224116-m03_multinode-224116.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 cp multinode-224116-m03:/home/docker/cp-test.txt multinode-224116-m02:/home/docker/cp-test_multinode-224116-m03_multinode-224116-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 ssh -n multinode-224116-m02 "sudo cat /home/docker/cp-test_multinode-224116-m03_multinode-224116-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.46s)

TestMultiNode/serial/StopNode (2.97s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-224116 node stop m03: (2.079840517s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-224116 status: exit status 7 (440.444464ms)

-- stdout --
	multinode-224116
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-224116-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-224116-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-224116 status --alsologtostderr: exit status 7 (449.040111ms)

-- stdout --
	multinode-224116
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-224116-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-224116-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 11:02:51.621371  355222 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:02:51.621519  355222 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:02:51.621531  355222 out.go:309] Setting ErrFile to fd 2...
	I1002 11:02:51.621539  355222 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:02:51.621754  355222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 11:02:51.621942  355222 out.go:303] Setting JSON to false
	I1002 11:02:51.621984  355222 mustload.go:65] Loading cluster: multinode-224116
	I1002 11:02:51.622088  355222 notify.go:220] Checking for updates...
	I1002 11:02:51.622541  355222 config.go:182] Loaded profile config "multinode-224116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:02:51.622562  355222 status.go:255] checking status of multinode-224116 ...
	I1002 11:02:51.623054  355222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:02:51.623113  355222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:02:51.643013  355222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44297
	I1002 11:02:51.643461  355222 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:02:51.644072  355222 main.go:141] libmachine: Using API Version  1
	I1002 11:02:51.644104  355222 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:02:51.644494  355222 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:02:51.644698  355222 main.go:141] libmachine: (multinode-224116) Calling .GetState
	I1002 11:02:51.646466  355222 status.go:330] multinode-224116 host status = "Running" (err=<nil>)
	I1002 11:02:51.646487  355222 host.go:66] Checking if "multinode-224116" exists ...
	I1002 11:02:51.646773  355222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:02:51.646819  355222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:02:51.661852  355222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40671
	I1002 11:02:51.662241  355222 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:02:51.662822  355222 main.go:141] libmachine: Using API Version  1
	I1002 11:02:51.662846  355222 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:02:51.663202  355222 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:02:51.663379  355222 main.go:141] libmachine: (multinode-224116) Calling .GetIP
	I1002 11:02:51.666591  355222 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:02:51.667042  355222 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:02:51.667076  355222 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:02:51.667209  355222 host.go:66] Checking if "multinode-224116" exists ...
	I1002 11:02:51.667558  355222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:02:51.667602  355222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:02:51.683119  355222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44853
	I1002 11:02:51.683547  355222 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:02:51.684022  355222 main.go:141] libmachine: Using API Version  1
	I1002 11:02:51.684045  355222 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:02:51.684356  355222 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:02:51.684551  355222 main.go:141] libmachine: (multinode-224116) Calling .DriverName
	I1002 11:02:51.684737  355222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 11:02:51.684764  355222 main.go:141] libmachine: (multinode-224116) Calling .GetSSHHostname
	I1002 11:02:51.687785  355222 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:02:51.688286  355222 main.go:141] libmachine: (multinode-224116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:8e:87", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:00:10 +0000 UTC Type:0 Mac:52:54:00:85:8e:87 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-224116 Clientid:01:52:54:00:85:8e:87}
	I1002 11:02:51.688327  355222 main.go:141] libmachine: (multinode-224116) DBG | domain multinode-224116 has defined IP address 192.168.39.165 and MAC address 52:54:00:85:8e:87 in network mk-multinode-224116
	I1002 11:02:51.688431  355222 main.go:141] libmachine: (multinode-224116) Calling .GetSSHPort
	I1002 11:02:51.688623  355222 main.go:141] libmachine: (multinode-224116) Calling .GetSSHKeyPath
	I1002 11:02:51.688810  355222 main.go:141] libmachine: (multinode-224116) Calling .GetSSHUsername
	I1002 11:02:51.688960  355222 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116/id_rsa Username:docker}
	I1002 11:02:51.784597  355222 ssh_runner.go:195] Run: systemctl --version
	I1002 11:02:51.790293  355222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:02:51.803403  355222 kubeconfig.go:92] found "multinode-224116" server: "https://192.168.39.165:8443"
	I1002 11:02:51.803430  355222 api_server.go:166] Checking apiserver status ...
	I1002 11:02:51.803466  355222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 11:02:51.818787  355222 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup
	I1002 11:02:51.829478  355222 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod11cc08b65180f58db5ea8ca677f3032f/crio-413ab1884fa2bacfef9474822763080550ab6858a7c54e110d8fdb0a80cb54ed"
	I1002 11:02:51.829566  355222 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod11cc08b65180f58db5ea8ca677f3032f/crio-413ab1884fa2bacfef9474822763080550ab6858a7c54e110d8fdb0a80cb54ed/freezer.state
	I1002 11:02:51.839515  355222 api_server.go:204] freezer state: "THAWED"
	I1002 11:02:51.839554  355222 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I1002 11:02:51.845405  355222 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I1002 11:02:51.845435  355222 status.go:421] multinode-224116 apiserver status = Running (err=<nil>)
	I1002 11:02:51.845445  355222 status.go:257] multinode-224116 status: &{Name:multinode-224116 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 11:02:51.845466  355222 status.go:255] checking status of multinode-224116-m02 ...
	I1002 11:02:51.845785  355222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:02:51.845822  355222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:02:51.861297  355222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43011
	I1002 11:02:51.862566  355222 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:02:51.863040  355222 main.go:141] libmachine: Using API Version  1
	I1002 11:02:51.863059  355222 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:02:51.863385  355222 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:02:51.863568  355222 main.go:141] libmachine: (multinode-224116-m02) Calling .GetState
	I1002 11:02:51.865128  355222 status.go:330] multinode-224116-m02 host status = "Running" (err=<nil>)
	I1002 11:02:51.865151  355222 host.go:66] Checking if "multinode-224116-m02" exists ...
	I1002 11:02:51.865442  355222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:02:51.865491  355222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:02:51.880416  355222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41819
	I1002 11:02:51.880836  355222 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:02:51.881301  355222 main.go:141] libmachine: Using API Version  1
	I1002 11:02:51.881326  355222 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:02:51.881683  355222 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:02:51.881878  355222 main.go:141] libmachine: (multinode-224116-m02) Calling .GetIP
	I1002 11:02:51.884426  355222 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:02:51.884841  355222 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:02:51.884883  355222 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:02:51.885027  355222 host.go:66] Checking if "multinode-224116-m02" exists ...
	I1002 11:02:51.885376  355222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:02:51.885424  355222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:02:51.900506  355222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46189
	I1002 11:02:51.900940  355222 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:02:51.901502  355222 main.go:141] libmachine: Using API Version  1
	I1002 11:02:51.901544  355222 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:02:51.901870  355222 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:02:51.902093  355222 main.go:141] libmachine: (multinode-224116-m02) Calling .DriverName
	I1002 11:02:51.902284  355222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 11:02:51.902305  355222 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHHostname
	I1002 11:02:51.904996  355222 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:02:51.905455  355222 main.go:141] libmachine: (multinode-224116-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:06:6c", ip: ""} in network mk-multinode-224116: {Iface:virbr1 ExpiryTime:2023-10-02 12:01:17 +0000 UTC Type:0 Mac:52:54:00:5a:06:6c Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-224116-m02 Clientid:01:52:54:00:5a:06:6c}
	I1002 11:02:51.905497  355222 main.go:141] libmachine: (multinode-224116-m02) DBG | domain multinode-224116-m02 has defined IP address 192.168.39.135 and MAC address 52:54:00:5a:06:6c in network mk-multinode-224116
	I1002 11:02:51.905660  355222 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHPort
	I1002 11:02:51.905821  355222 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHKeyPath
	I1002 11:02:51.905970  355222 main.go:141] libmachine: (multinode-224116-m02) Calling .GetSSHUsername
	I1002 11:02:51.906065  355222 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17340-332611/.minikube/machines/multinode-224116-m02/id_rsa Username:docker}
	I1002 11:02:51.997706  355222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 11:02:52.009814  355222 status.go:257] multinode-224116-m02 status: &{Name:multinode-224116-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 11:02:52.009866  355222 status.go:255] checking status of multinode-224116-m03 ...
	I1002 11:02:52.010204  355222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 11:02:52.010261  355222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 11:02:52.026860  355222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32963
	I1002 11:02:52.027337  355222 main.go:141] libmachine: () Calling .GetVersion
	I1002 11:02:52.027798  355222 main.go:141] libmachine: Using API Version  1
	I1002 11:02:52.027819  355222 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 11:02:52.028164  355222 main.go:141] libmachine: () Calling .GetMachineName
	I1002 11:02:52.028389  355222 main.go:141] libmachine: (multinode-224116-m03) Calling .GetState
	I1002 11:02:52.029855  355222 status.go:330] multinode-224116-m03 host status = "Stopped" (err=<nil>)
	I1002 11:02:52.029868  355222 status.go:343] host is not running, skipping remaining checks
	I1002 11:02:52.029883  355222 status.go:257] multinode-224116-m03 status: &{Name:multinode-224116-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.97s)
TestMultiNode/serial/StartAfterStop (31.04s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-224116 node start m03 --alsologtostderr: (30.399054437s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.04s)
TestMultiNode/serial/DeleteNode (1.74s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-224116 node delete m03: (1.197387206s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.74s)
TestMultiNode/serial/RestartMultiNode (446.27s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-224116 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1002 11:19:04.537192  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 11:19:14.660375  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 11:21:55.305927  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 11:22:07.584569  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 11:24:04.538999  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 11:24:14.660447  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-224116 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m25.731502761s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224116 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (446.27s)
TestMultiNode/serial/ValidateNameConflict (47.75s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-224116
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-224116-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-224116-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.04261ms)
-- stdout --
	* [multinode-224116-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-224116-m02' is duplicated with machine name 'multinode-224116-m02' in profile 'multinode-224116'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-224116-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-224116-m03 --driver=kvm2  --container-runtime=crio: (46.41736824s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-224116
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-224116: exit status 80 (229.868793ms)
-- stdout --
	* Adding node m03 to cluster multinode-224116
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-224116-m03 already exists in multinode-224116-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-224116-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-224116-m03: (1.005529416s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.75s)
TestScheduledStopUnix (117.71s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-508143 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-508143 --memory=2048 --driver=kvm2  --container-runtime=crio: (46.140845734s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-508143 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-508143 -n scheduled-stop-508143
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-508143 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-508143 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-508143 -n scheduled-stop-508143
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-508143
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-508143 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1002 11:31:55.306418  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-508143
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-508143: exit status 7 (62.376398ms)
-- stdout --
	scheduled-stop-508143
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-508143 -n scheduled-stop-508143
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-508143 -n scheduled-stop-508143: exit status 7 (60.58058ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-508143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-508143
--- PASS: TestScheduledStopUnix (117.71s)
TestKubernetesUpgrade (202.15s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-613769 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-613769 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m39.769923486s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-613769
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-613769: (7.120260372s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-613769 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-613769 status --format={{.Host}}: exit status 7 (83.64503ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-613769 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-613769 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.512276878s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-613769 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-613769 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-613769 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (91.058097ms)
-- stdout --
	* [kubernetes-upgrade-613769] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-613769
	    minikube start -p kubernetes-upgrade-613769 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6137692 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-613769 --kubernetes-version=v1.28.2
	    
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-613769 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1002 11:38:47.585666  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 11:39:04.535107  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-613769 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.396786097s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-613769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-613769
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-613769: (1.096247657s)
--- PASS: TestKubernetesUpgrade (202.15s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-083017 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-083017 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (82.019252ms)
-- stdout --
	* [NoKubernetes-083017] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
TestNoKubernetes/serial/StartWithK8s (104.86s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-083017 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-083017 --driver=kvm2  --container-runtime=crio: (1m44.605634566s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-083017 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (104.86s)
TestNetworkPlugins/group/false (2.82s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-124285 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-124285 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (104.008033ms)
-- stdout --
	* [false-124285] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17340
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1002 11:32:12.795812  363510 out.go:296] Setting OutFile to fd 1 ...
	I1002 11:32:12.796187  363510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:32:12.796238  363510 out.go:309] Setting ErrFile to fd 2...
	I1002 11:32:12.796257  363510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 11:32:12.796819  363510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17340-332611/.minikube/bin
	I1002 11:32:12.797834  363510 out.go:303] Setting JSON to false
	I1002 11:32:12.798921  363510 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8079,"bootTime":1696238254,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 11:32:12.798983  363510 start.go:138] virtualization: kvm guest
	I1002 11:32:12.801481  363510 out.go:177] * [false-124285] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 11:32:12.803075  363510 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 11:32:12.803100  363510 notify.go:220] Checking for updates...
	I1002 11:32:12.804642  363510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 11:32:12.806405  363510 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17340-332611/kubeconfig
	I1002 11:32:12.808291  363510 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17340-332611/.minikube
	I1002 11:32:12.809778  363510 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 11:32:12.811218  363510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 11:32:12.813225  363510 config.go:182] Loaded profile config "NoKubernetes-083017": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:32:12.813347  363510 config.go:182] Loaded profile config "force-systemd-env-120922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:32:12.813434  363510 config.go:182] Loaded profile config "offline-crio-091993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 11:32:12.813538  363510 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 11:32:12.849714  363510 out.go:177] * Using the kvm2 driver based on user configuration
	I1002 11:32:12.851384  363510 start.go:298] selected driver: kvm2
	I1002 11:32:12.851400  363510 start.go:902] validating driver "kvm2" against <nil>
	I1002 11:32:12.851418  363510 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 11:32:12.853826  363510 out.go:177] 
	W1002 11:32:12.855387  363510 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1002 11:32:12.856930  363510 out.go:177] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-124285 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-124285
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-124285
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-124285
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-124285
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-124285
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-124285
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-124285
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-124285
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-124285
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-124285
>>> host: /etc/nsswitch.conf:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: /etc/hosts:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: /etc/resolv.conf:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-124285

>>> host: crictl pods:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: crictl containers:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> k8s: describe netcat deployment:
error: context "false-124285" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-124285" does not exist

>>> k8s: netcat logs:
error: context "false-124285" does not exist

>>> k8s: describe coredns deployment:
error: context "false-124285" does not exist

>>> k8s: describe coredns pods:
error: context "false-124285" does not exist

>>> k8s: coredns logs:
error: context "false-124285" does not exist

>>> k8s: describe api server pod(s):
error: context "false-124285" does not exist

>>> k8s: api server logs:
error: context "false-124285" does not exist

>>> host: /etc/cni:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: ip a s:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: ip r s:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: iptables-save:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: iptables table nat:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> k8s: describe kube-proxy daemon set:
error: context "false-124285" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-124285" does not exist

>>> k8s: kube-proxy logs:
error: context "false-124285" does not exist

>>> host: kubelet daemon status:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: kubelet daemon config:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> k8s: kubelet logs:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-124285

>>> host: docker daemon status:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: docker daemon config:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: /etc/docker/daemon.json:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: docker system info:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: cri-docker daemon status:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: cri-docker daemon config:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: cri-dockerd version:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: containerd daemon status:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: containerd daemon config:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: /etc/containerd/config.toml:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: containerd config dump:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: crio daemon status:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: crio daemon config:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: /etc/crio:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

>>> host: crio config:
* Profile "false-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124285"

----------------------- debugLogs end: false-124285 [took: 2.584281795s] --------------------------------
helpers_test.go:175: Cleaning up "false-124285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-124285
--- PASS: TestNetworkPlugins/group/false (2.82s)

TestNoKubernetes/serial/StartWithStopK8s (67s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-083017 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1002 11:34:04.536601  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 11:34:14.660143  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-083017 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m5.731814785s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-083017 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-083017 status -o json: exit status 2 (229.475378ms)

-- stdout --
	{"Name":"NoKubernetes-083017","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-083017
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-083017: (1.039419677s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (67.00s)

TestNoKubernetes/serial/Start (80.94s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-083017 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-083017 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m20.940757598s)
--- PASS: TestNoKubernetes/serial/Start (80.94s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-083017 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-083017 "sudo systemctl is-active --quiet service kubelet": exit status 1 (219.075755ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

TestNoKubernetes/serial/ProfileList (0.82s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.82s)

TestNoKubernetes/serial/Stop (1.42s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-083017
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-083017: (1.423083257s)
--- PASS: TestNoKubernetes/serial/Stop (1.42s)

TestStoppedBinaryUpgrade/Setup (1.95s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.95s)

TestPause/serial/Start (127.02s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-892275 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-892275 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m7.022126568s)
--- PASS: TestPause/serial/Start (127.02s)

TestNetworkPlugins/group/auto/Start (102.86s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-124285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E1002 11:39:14.660053  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-124285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m42.857009208s)
--- PASS: TestNetworkPlugins/group/auto/Start (102.86s)

TestNetworkPlugins/group/kindnet/Start (69.69s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-124285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-124285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m9.693665671s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.69s)

TestNetworkPlugins/group/calico/Start (104.49s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-124285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-124285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m44.485913587s)
--- PASS: TestNetworkPlugins/group/calico/Start (104.49s)

TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-124285 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

TestNetworkPlugins/group/auto/NetCatPod (12.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-124285 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9b2q4" [a1bdfc42-1c41-49e1-acf1-df412a31a220] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9b2q4" [a1bdfc42-1c41-49e1-acf1-df412a31a220] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.014099054s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.39s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-124285 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-124285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-124285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (123.61s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-124285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-124285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (2m3.605901464s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (123.61s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-5f58x" [11b28f17-1166-43c4-bcb8-6df14c9bc178] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.032184636s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-124285 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-124285 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-thdks" [a51f9111-f4ad-4490-96be-73dc7b267836] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-thdks" [a51f9111-f4ad-4490-96be-73dc7b267836] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.010650131s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-124285 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-124285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-124285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.46s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-204505
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.46s)

TestNetworkPlugins/group/enable-default-cni/Start (132.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-124285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-124285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m12.13945004s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (132.14s)

TestNetworkPlugins/group/flannel/Start (141.82s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-124285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-124285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m21.816820821s)
--- PASS: TestNetworkPlugins/group/flannel/Start (141.82s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4pbbp" [43a08aa5-3457-41a6-8f63-46a7a278175c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.025955144s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-124285 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

TestNetworkPlugins/group/calico/NetCatPod (17.44s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-124285 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fjw2k" [0b5d43d3-53a0-4217-baae-7733c156140b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fjw2k" [0b5d43d3-53a0-4217-baae-7733c156140b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 17.012736444s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (17.44s)

TestNetworkPlugins/group/calico/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-124285 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-124285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-124285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (90.43s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-124285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-124285 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m30.429502003s)
--- PASS: TestNetworkPlugins/group/bridge/Start (90.43s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-124285 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (14.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-124285 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context custom-flannel-124285 replace --force -f testdata/netcat-deployment.yaml: (1.017108177s)
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6dm9g" [3ec4a5d0-cccf-4b8d-86be-1728fb69820d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6dm9g" [3ec4a5d0-cccf-4b8d-86be-1728fb69820d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.024592723s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.12s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-124285 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-124285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-124285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestStartStop/group/old-k8s-version/serial/FirstStart (147.64s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-749860 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E1002 11:44:04.534929  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-749860 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m27.643386652s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (147.64s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-124285 "pgrep -a kubelet"
E1002 11:44:14.660422  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.56s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-124285 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-t468g" [3e0ee516-7694-4247-b676-d742ee670e5c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-t468g" [3e0ee516-7694-4247-b676-d742ee670e5c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.012341627s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.56s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-54fqh" [52b4ebfc-004c-47ff-a499-ed2557117395] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.024693114s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-124285 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-124285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-124285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-124285 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/flannel/NetCatPod (16.33s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-124285 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-r2s56" [f5320c33-ba49-40fb-b893-c7c522477432] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-r2s56" [f5320c33-ba49-40fb-b893-c7c522477432] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 16.012252199s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (16.33s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-124285 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

TestNetworkPlugins/group/bridge/NetCatPod (14.55s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-124285 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hkzkc" [1e94dbca-6821-489d-85de-fa5f411c8776] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hkzkc" [1e94dbca-6821-489d-85de-fa5f411c8776] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.033423703s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.55s)

TestStartStop/group/no-preload/serial/FirstStart (87.95s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-304121 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-304121 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (1m27.947195778s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (87.95s)

TestNetworkPlugins/group/bridge/DNS (26.21s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-124285 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-124285 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.238551097s)

-- stdout --
	;; connection timed out; no servers could be reached
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-124285 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-124285 exec deployment/netcat -- nslookup kubernetes.default: (10.197136059s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (26.21s)

TestNetworkPlugins/group/flannel/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-124285 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-124285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.20s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-124285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestStartStop/group/embed-certs/serial/FirstStart (116.99s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-487027 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-487027 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (1m56.994794708s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (116.99s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-124285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-124285 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (117.38s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-777999 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 11:45:54.840675  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 11:45:54.846017  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 11:45:54.856415  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 11:45:54.876793  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 11:45:54.917160  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 11:45:54.997595  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 11:45:55.158028  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 11:45:55.478723  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 11:45:56.119612  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 11:45:57.399914  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 11:45:59.960438  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 11:46:05.081370  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-777999 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (1m57.382966761s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (117.38s)

TestStartStop/group/no-preload/serial/DeployApp (13.57s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-304121 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [eedc8993-548b-4cff-ae7d-186b8f03dfe1] Pending
E1002 11:46:15.322171  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
helpers_test.go:344: "busybox" [eedc8993-548b-4cff-ae7d-186b8f03dfe1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [eedc8993-548b-4cff-ae7d-186b8f03dfe1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 13.032866521s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-304121 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (13.57s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.34s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-304121 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-304121 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.241938798s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-304121 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.34s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.48s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-749860 create -f testdata/busybox.yaml
E1002 11:46:30.760205  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8dd75497-e5c2-4ec4-9ddc-011ff1c3ea24] Pending
E1002 11:46:31.401436  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
helpers_test.go:344: "busybox" [8dd75497-e5c2-4ec4-9ddc-011ff1c3ea24] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 11:46:32.681819  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
helpers_test.go:344: "busybox" [8dd75497-e5c2-4ec4-9ddc-011ff1c3ea24] Running
E1002 11:46:35.242072  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
E1002 11:46:35.802914  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 11:46:38.359584  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 11:46:40.362857  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.034200053s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-749860 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.48s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.91s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-749860 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-749860 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/embed-certs/serial/DeployApp (10.45s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-487027 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e7e46ead-de44-4525-9f15-f20ac226cffd] Pending
helpers_test.go:344: "busybox" [e7e46ead-de44-4525-9f15-f20ac226cffd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e7e46ead-de44-4525-9f15-f20ac226cffd] Running
E1002 11:47:11.084224  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.027655791s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-487027 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.45s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-487027 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-487027 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.119451126s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-487027 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.41s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-777999 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7e7f8435-3c92-447f-ad2c-c3e7da52e094] Pending
helpers_test.go:344: "busybox" [7e7f8435-3c92-447f-ad2c-c3e7da52e094] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 11:47:32.242891  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
helpers_test.go:344: "busybox" [7e7f8435-3c92-447f-ad2c-c3e7da52e094] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.020550953s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-777999 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-777999 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-777999 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.029453076s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-777999 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/no-preload/serial/SecondStart (696.55s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-304121 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-304121 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (11m36.271241392s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-304121 -n no-preload-304121
E1002 12:00:37.708869  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (696.55s)

TestStartStop/group/old-k8s-version/serial/SecondStart (706.97s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-749860 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E1002 11:49:14.660123  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 11:49:15.317425  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 11:49:15.322735  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 11:49:15.333013  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 11:49:15.353356  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 11:49:15.393763  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 11:49:15.474146  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 11:49:15.634605  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 11:49:15.955687  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 11:49:16.596299  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-749860 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (11m46.704534288s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-749860 -n old-k8s-version-749860
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (706.97s)

TestStartStop/group/embed-certs/serial/SecondStart (596.22s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-487027 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 11:49:52.440912  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 11:49:54.579580  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
E1002 11:49:56.280038  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-487027 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (9m55.947545155s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-487027 -n embed-certs-487027
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (596.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (560.34s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-777999 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 11:50:15.060431  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
E1002 11:50:37.240538  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 11:50:48.812527  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 11:50:54.840296  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 11:50:56.021546  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
E1002 11:51:14.361927  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 11:51:22.526049  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 11:51:30.122758  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
E1002 11:51:55.306172  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 11:51:57.806088  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
E1002 11:51:59.160800  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 11:52:10.733126  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 11:52:17.942483  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
E1002 11:52:22.001512  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 11:52:49.687654  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 11:53:30.518575  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 11:53:58.202159  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 11:54:04.535567  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 11:54:14.659447  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 11:54:15.318228  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 11:54:26.888604  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 11:54:34.099457  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
E1002 11:54:43.001719  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 11:54:54.574009  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 11:55:01.783581  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
E1002 11:55:27.586136  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 11:55:54.840596  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/auto-124285/client.crt: no such file or directory
E1002 11:56:30.122947  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/kindnet-124285/client.crt: no such file or directory
E1002 11:56:55.305604  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/ingress-addon-legacy-982656/client.crt: no such file or directory
E1002 11:57:22.001979  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/calico-124285/client.crt: no such file or directory
E1002 11:58:30.519048  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/custom-flannel-124285/client.crt: no such file or directory
E1002 11:59:04.535207  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
E1002 11:59:14.659838  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 11:59:15.317181  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 11:59:26.889063  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-777999 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (9m20.049339264s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-777999 -n default-k8s-diff-port-777999
E1002 11:59:34.099763  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (560.34s)

TestStartStop/group/newest-cni/serial/FirstStart (61.17s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-929075 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 12:14:14.660199  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/functional-250301/client.crt: no such file or directory
E1002 12:14:15.318006  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/enable-default-cni-124285/client.crt: no such file or directory
E1002 12:14:26.888928  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/flannel-124285/client.crt: no such file or directory
E1002 12:14:34.099377  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/bridge-124285/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-929075 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (1m1.1744625s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.17s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.9s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-929075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-929075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.89864384s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.90s)

TestStartStop/group/newest-cni/serial/Stop (7.1s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-929075 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-929075 --alsologtostderr -v=3: (7.10106439s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.10s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-929075 -n newest-cni-929075
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-929075 -n newest-cni-929075: exit status 7 (57.06479ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-929075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (48.71s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-929075 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-929075 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (48.360654293s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-929075 -n newest-cni-929075
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (48.71s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-929075 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (2.48s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-929075 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-929075 -n newest-cni-929075
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-929075 -n newest-cni-929075: exit status 2 (237.701042ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-929075 -n newest-cni-929075
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-929075 -n newest-cni-929075: exit status 2 (237.677112ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-929075 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-929075 -n newest-cni-929075
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-929075 -n newest-cni-929075
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.48s)

Test skip (36/288)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.2/cached-images 0
13 TestDownloadOnly/v1.28.2/binaries 0
14 TestDownloadOnly/v1.28.2/kubectl 0
18 TestDownloadOnlyKic 0
29 TestAddons/parallel/Olm 0
40 TestDockerFlags 0
43 TestDockerEnvContainerd 0
45 TestHyperKitDriverInstallOrUpdate 0
46 TestHyperkitDriverSkipUpgrade 0
97 TestFunctional/parallel/DockerEnv 0
98 TestFunctional/parallel/PodmanEnv 0
106 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
107 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
108 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
109 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
110 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
111 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
112 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
113 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
146 TestGvisorAddon 0
147 TestImageBuild 0
180 TestKicCustomNetwork 0
181 TestKicExistingNetwork 0
182 TestKicCustomSubnet 0
183 TestKicStaticIP 0
214 TestChangeNoneUser 0
217 TestScheduledStopWindows 0
219 TestSkaffold 0
221 TestInsufficientStorage 0
225 TestMissingContainerUpgrade 0
230 TestNetworkPlugins/group/kubenet 2.69
239 TestNetworkPlugins/group/cilium 3.03
253 TestStartStop/group/disable-driver-mounts 0.15

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
E1002 10:49:14.778120  339865 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17340-332611/.minikube/profiles/addons-304007/client.crt: no such file or directory
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (2.69s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-124285 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-124285

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-124285

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-124285

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-124285

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-124285

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-124285

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-124285

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-124285

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-124285

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-124285

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: /etc/hosts:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: /etc/resolv.conf:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-124285

>>> host: crictl pods:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: crictl containers:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> k8s: describe netcat deployment:
error: context "kubenet-124285" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-124285" does not exist

>>> k8s: netcat logs:
error: context "kubenet-124285" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-124285" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-124285" does not exist

>>> k8s: coredns logs:
error: context "kubenet-124285" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-124285" does not exist

>>> k8s: api server logs:
error: context "kubenet-124285" does not exist

>>> host: /etc/cni:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: ip a s:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: ip r s:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: iptables-save:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: iptables table nat:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-124285" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-124285" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-124285" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: kubelet daemon config:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> k8s: kubelet logs:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-124285

>>> host: docker daemon status:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: docker daemon config:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: docker system info:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: cri-docker daemon status:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: cri-docker daemon config:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: cri-dockerd version:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: containerd daemon status:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: containerd daemon config:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: containerd config dump:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: crio daemon status:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: crio daemon config:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: /etc/crio:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

>>> host: crio config:
* Profile "kubenet-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124285"

----------------------- debugLogs end: kubenet-124285 [took: 2.55375635s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-124285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-124285
--- SKIP: TestNetworkPlugins/group/kubenet (2.69s)

TestNetworkPlugins/group/cilium (3.03s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-124285 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-124285

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-124285

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-124285

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-124285

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-124285

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-124285

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-124285

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-124285

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-124285

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-124285

>>> host: /etc/nsswitch.conf:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: /etc/hosts:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: /etc/resolv.conf:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-124285

>>> host: crictl pods:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: crictl containers:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> k8s: describe netcat deployment:
error: context "cilium-124285" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-124285" does not exist

>>> k8s: netcat logs:
error: context "cilium-124285" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-124285" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-124285" does not exist

>>> k8s: coredns logs:
error: context "cilium-124285" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-124285" does not exist

>>> k8s: api server logs:
error: context "cilium-124285" does not exist

>>> host: /etc/cni:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: ip a s:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: ip r s:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: iptables-save:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: iptables table nat:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-124285

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-124285

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-124285" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-124285" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-124285

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-124285

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-124285" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-124285" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-124285" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-124285" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-124285" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: kubelet daemon config:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> k8s: kubelet logs:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-124285

>>> host: docker daemon status:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: docker daemon config:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: docker system info:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: cri-docker daemon status:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: cri-docker daemon config:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: cri-dockerd version:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: containerd daemon status:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: containerd daemon config:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: containerd config dump:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: crio daemon status:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: crio daemon config:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: /etc/crio:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

>>> host: crio config:
* Profile "cilium-124285" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124285"

----------------------- debugLogs end: cilium-124285 [took: 2.89763727s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-124285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-124285
--- SKIP: TestNetworkPlugins/group/cilium (3.03s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-448198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-448198
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)